Distributed Computing Simplified

This tutorial addresses two main advantages of distributed computing and offers three solutions to simplify it, depending on the use case.

By Fawaz Ghali, PhD · Jan. 24, 2024 · Tutorial

Distributed computing, also known as distributed processing, involves connecting numerous computer servers through a network to form a cluster. This cluster, known as a "distributed system," enables the sharing of data and the coordination of processing power. Distributed computing provides many benefits, such as:

  • Scalability utilizing a "scale-out architecture"
  • Enhanced performance leveraging parallelism
  • Increased resilience by employing redundancy
  • Cost-effectiveness by utilizing low-cost, commodity hardware

There are two main advantages of distributed computing: 

  1. Utilizing the collective processing capabilities of a clustered system
  2. Minimizing network hops by conducting computations on the cluster processing the data

In this blog post, we explore how Hazelcast simplifies distributed computing, both as a self-managed deployment and as a managed service. Hazelcast offers three solutions for distributed computing, depending on the use case:

Option #1: Entry Processor

An entry processor executes your code on a map entry atomically, so you can read, update, and remove map entries directly on the cluster members (servers) that own them. This is a good option for bulk processing on an IMap.

How To Create an EntryProcessor

Java
 
import java.util.Map;

import com.hazelcast.map.EntryProcessor;

public class IncrementingEntryProcessor implements EntryProcessor<Integer, Integer, Integer> {
    @Override
    public Integer process(Map.Entry<Integer, Integer> entry) {
        // Increment the entry's value atomically on the member that owns the key.
        Integer value = entry.getValue();
        entry.setValue(value + 1);
        return value + 1;
    }

    @Override
    public EntryProcessor<Integer, Integer, Integer> getBackupProcessor() {
        // Apply the same processor on backup replicas to keep them in sync.
        return IncrementingEntryProcessor.this;
    }
}


How To Use the EntryProcessor

Java
 
IMap<Integer, Integer> map = hazelcastInstance.getMap("myMap");

// Populate the map with 100 entries.
for (int i = 0; i < 100; i++) {
    map.put(i, i);
}

// Run the processor on every entry; the result map holds each key's new value.
Map<Integer, Object> result = map.executeOnEntries(new IncrementingEntryProcessor());


How To Optimize EntryProcessor Performance

  • Offloadable: implementing this interface moves execution of the entry processor off the partition thread and onto an executor thread.
  • ReadOnly: implementing this interface tells Hazelcast that the processor does not modify the entry, so it can skip acquiring a lock on the key.

You can learn more about the entry processor in our documentation.
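
The two can be combined. Below is a minimal sketch of a read-only, offloaded entry processor, assuming the Hazelcast 5.x interfaces com.hazelcast.core.Offloadable and com.hazelcast.core.ReadOnly; the class name ReadValueProcessor is purely illustrative.

Java
 
import java.util.Map;

import com.hazelcast.core.Offloadable;
import com.hazelcast.core.ReadOnly;
import com.hazelcast.map.EntryProcessor;

public class ReadValueProcessor
        implements EntryProcessor<Integer, Integer, Integer>, Offloadable, ReadOnly {

    @Override
    public Integer process(Map.Entry<Integer, Integer> entry) {
        // Never calls entry.setValue(), so the processor qualifies as read-only.
        return entry.getValue();
    }

    @Override
    public String getExecutorName() {
        // Run on the built-in offloadable executor instead of the partition thread.
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public EntryProcessor<Integer, Integer, Integer> getBackupProcessor() {
        // A read-only processor has nothing to apply on backup replicas.
        return null;
    }
}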

Option #2: Java Executor Service

Simply put, you can run your Java code on cluster members and collect the results. Java's Executor framework is a standout feature that enables the asynchronous execution of tasks such as database queries, complex calculations, and image rendering, with tasks implemented as either Callable or Runnable. Hazelcast's distributed executor service follows the same model, so you implement tasks the same two ways: Callable or Runnable.

How To Implement a Callable Task

Java
 
import java.io.Serializable;
import java.util.concurrent.Callable;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.map.IMap;

public class SumTask
        implements Callable<Integer>, Serializable, HazelcastInstanceAware {

    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
        // Injected by Hazelcast before the task runs on a member.
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public Integer call() throws Exception {
        IMap<String, Integer> map = hazelcastInstance.getMap("map");
        int result = 0;
        // Sum only the entries whose keys are owned by this member.
        for (String key : map.localKeySet()) {
            System.out.println("Calculating for key: " + key);
            result += map.get(key);
        }
        System.out.println("Local result: " + result);
        return result;
    }
}


How To Implement a Runnable Task

Java
 
import java.io.Serializable;

public class EchoTask implements Runnable, Serializable {
    private final String msg;

    public EchoTask(String msg) {
        this.msg = msg;
    }

    @Override
    public void run() {
        try {
            // Simulate a long-running task.
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            // Restore the interrupt flag so the executor can shut down cleanly.
            Thread.currentThread().interrupt();
        }
        System.out.println("echo:" + msg);
    }
}
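
To actually run these tasks on the cluster, you submit them to Hazelcast's distributed executor service. Below is a minimal sketch, assuming Hazelcast 5.x; the member class name MasterMember (referenced again in the scaling section) and the executor name "exec" are only illustrations.

Java
 
import java.util.concurrent.Future;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

public class MasterMember {
    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("exec");

        // Fire-and-forget: run the Runnable task on a member of the cluster.
        executor.execute(new EchoTask("hello"));

        // Submit the Callable task and block until its result is available.
        Future<Integer> future = executor.submit(new SumTask());
        System.out.println("Sum of local entries: " + future.get());
    }
}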


How To Scale the Executor Service

To scale up, improve the processing capacity of each cluster member (JVM). You can do this by increasing the pool-size property described in Configuring Executor Service (i.e., increasing the thread count). However, be aware of your member's capacity: if it cannot handle the additional load created by more threads, consider increasing the member's resources (CPU, memory, etc.). For example, set the pool size to 5 and run the MasterMember example; you will see that each EchoTask runs as soon as it is produced.
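
Here is a minimal sketch of that scale-up step using Hazelcast's programmatic configuration, assuming the Config.getExecutorConfig API; the executor name "exec" simply matches the submission sketch above.

Java
 
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ScaledUpMember {
    public static void main(String[] args) {
        Config config = new Config();
        // Five worker threads per member for the "exec" executor service.
        config.getExecutorConfig("exec").setPoolSize(5);
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}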

To scale out, add more members instead of increasing only one member's capacity. You may want to expand your cluster by adding more physical or virtual machines. For example, in the EchoTask scenario in the Runnable section, you can create another Hazelcast instance; it automatically joins the cluster, takes part in the executions started by MasterMember, and starts processing. You can read more about the Executor Service in our documentation.

Option #3: Pipeline

A pipeline lets you run batch or streaming computations quickly across the cluster members. It consists of three kinds of elements: one or more sources, processing stages, and at least one sink. Depending on the data source, pipelines suit a range of use cases, such as stream processing over a continuous flow of data (i.e., events) to produce results in real time as the data is generated, or batch processing of a fixed amount of static data for routine tasks such as generating daily reports. In Hazelcast, pipelines can be defined using either SQL or the Jet API.

Below are a few more resources:

  • SQL pipelines
  • Jet API pipelines
  • Pipelines to enrich streams demo 
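
As a concrete illustration of the Jet API option, here is a minimal batch pipeline sketch, assuming Hazelcast 5.x (com.hazelcast.jet.pipeline.*); the map names "myMap" and "doubledMap" are only examples. The pipeline reads entries from one IMap, doubles each value, and writes the results to another IMap.

Java
 
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.Util;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class DoublingPipeline {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(Sources.<Integer, Integer>map("myMap"))   // source
                .map(e -> Util.entry(e.getKey(), e.getValue() * 2)) // processing stage
                .writeTo(Sinks.map("doubledMap"));                  // sink

        // Submit the job to the cluster and wait for the batch to complete.
        hz.getJet().newJob(pipeline).join();
    }
}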

When Should You Choose Entry Processor, Executor Service, or Pipeline?

An entry processor is a suitable choice when you need bulk processing on a map. Typically, that would otherwise involve looping over the keys, retrieving each value with map.get(key), modifying it, and putting the entry back into the map with map.put(key, value).

The executor service is well-suited for executing arbitrary Java code on your cluster members.

Pipelines are well-suited for scenarios where you need to process multiple entries (i.e., aggregations or joins) or when multiple computing steps need to be performed in parallel. With pipelines, you can update maps based on the results of your computation using an entry processor sink.
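
For that last point, here is a minimal sketch of an entry processor sink, assuming the Jet API's Sinks.mapWithEntryProcessor (Hazelcast 5.x): for each item the pipeline emits, it applies the IncrementingEntryProcessor from Option #1 to the matching key in "myMap".

Java
 
import java.util.Map;

import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class EntryProcessorSinkPipeline {
    public static Pipeline build() {
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(Sources.<Integer, Integer>map("myMap"))
                // Increment each value in place on the member that owns the key.
                .writeTo(Sinks.mapWithEntryProcessor(
                        "myMap",
                        (Map.Entry<Integer, Integer> e) -> e.getKey(),
                        (Map.Entry<Integer, Integer> e) -> new IncrementingEntryProcessor()));
        return pipeline;
    }
}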

So here you have three options for distributed computing using Hazelcast. We look forward to your feedback and comments about this blog post! Share your experience on the Hazelcast GitHub repository.
