I am evaluating Terracotta for my current problem. The process is CPU intensive and works over approximately 5-10 GB of data held in memory (RAM). Each object in memory is about 1 kilobyte in size and contains a handful of primitive data types. The whole in-RAM data set goes through thousands of iterations, and each iteration modifies all of the objects; every object is completely rewritten. The process takes several days to finish.
The million-plus objects are partitioned and currently run on a many-core machine, but I need more power and more RAM (for bigger problems). The data/objects processed by one thread are not shared with the others.
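To make the shape of the workload concrete, here is a minimal sketch of what I mean (all class names, field names, and sizes below are hypothetical placeholders, not my actual code):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class WorkloadSketch {

    // Roughly 1 KB of primitive data per object (fields are made up).
    static class Record {
        final double[] values = new double[120]; // ~960 bytes
        int state;

        void mutate() {
            // Each iteration rewrites the whole object.
            for (int i = 0; i < values.length; i++) {
                values[i] = values[i] * 0.99 + 0.01;
            }
            state++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        // One private partition per thread; in the real job each partition
        // holds hundreds of thousands of records (scaled down here).
        Record[][] partitions = new Record[threads][10_000];
        for (Record[] p : partitions) {
            for (int i = 0; i < p.length; i++) p[i] = new Record();
        }

        for (int iter = 0; iter < 1_000; iter++) { // thousands in reality
            CountDownLatch done = new CountDownLatch(threads);
            for (Record[] partition : partitions) {
                pool.execute(() -> {
                    for (Record r : partition) r.mutate();
                    done.countDown();
                });
            }
            done.await(); // all partitions finish before the next pass
        }
        pool.shutdown();
    }
}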
Will Terracotta be a good solution? Wouldn't the syncing of millions of objects to the clustering server be a really bad bottleneck that makes it ineffective?
I think Terracotta is best suited for caching and fast retrieval. As a put rate, I have seen about 10K "batched puts" per second per cache server instance. "Batch update" mode means that you build a collection of entries and put them in one shot, which is more efficient than single puts.
Here is an example of a batch update:
// requires net.sf.ehcache.Element and java.util.{Collection, ArrayList}
cache.setNodeBulkLoadEnabled(true);
try {
    Collection<Element> entries = new ArrayList<Element>();
    while (...) { // build up the batch from your source data
        entries.add(new Element(key, value));
    }
    cache.putAll(entries);
} finally {
    cache.setNodeBulkLoadEnabled(false);
}

In addition to this, Terracotta has the BigMemory feature, which is capable of using memory outside of the JVM heap. To enable it, you have to add this to your ehcache.xml:
<cache name="com.xyz.MyPOJO" maxMemoryOffHeap="3g">
    <terracotta/>
</cache>

The above example will use 3 GB of RAM outside of your JVM heap. In general your JVM heap should not be bigger than 4 GB, otherwise your JVM will spend a lot of cycles on GC... which in your case would make the computation even slower.
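Note that the off-heap store is allocated from direct (non-heap) memory, so you also have to raise the JVM's direct memory limit when launching. Something along these lines (the classpath and main class are placeholders, and as far as I remember the direct memory limit has to be at least as large as the configured off-heap store):

java -Xmx4g -XX:MaxDirectMemorySize=4g -cp myapp.jar com.xyz.Main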
Another option would be to check out "compute/data grid" solutions. You could start with ... and ...