Is the Jasypt project dead?

I’m posting this because I don’t know where else to ask: is the Jasypt project dead? I ask because I have no idea where to ask questions or get community support for this Java JCE encryption library. The mailing lists seem dormant, the “forums” do not load, and the issue tracker seems pretty idle.

Is anyone still using this library? Does a support community still exist?

UPDATE: (from the project maintainer) https://sourceforge.net/p/jasypt/feature-requests/25/

Jasypt is DEFINITELY alive! Last version was released on February 25th, 2014.

Both the mailing lists and the forums were removed because of very low traffic, in favour of the SourceForge tracking system you are now using.

As for users… figures indicate almost 60,000 downloads in July 2014.

Generating Java classes for the Azure AD Graph API

NOTE: I’ve since abandoned this avenue to generate POJOs for the Graph API service. The Restlet Generator simply has too many issues in the resulting output (e.g. not handling package names properly, generics issues, not dealing with Edm.[types], etc.). However, this may still be of use to someone who wants to explore it further.

I recently had to write some code to talk to the Azure AD Graph API. This is a REST-based API that exchanges data via typical JSON payloads. For those having to write a Java client to talk to it, a good starting point is taking a look at this sample API application to get your feet wet. To anyone fluent in Java this code is less than desirable; however, it has no dependencies, which is nice.

When coding against a REST service it’s often nice to have a set of classes that you can marshal to/from the JSON payloads you are interacting with. Behind the scenes it appears that this Azure Graph API is an OData app, which does present “$metadata” about itself… cool! So now we can generate some classes from either of:

https://graph.windows.net/YOUR_TENANT_DOMAIN/$metadata

OR

https://graphregistry.cloudapp.net/GraphRegistry.svc/YOUR_TENANT_DOMAIN/$metadata

So what can we use to generate Java classes from this? Let’s use the Restlet OData Extension, which can generate code from OData schema documents. You will want to follow these instructions as well for the code generation.

IMPORTANT: You will also need a fork/version of Restlet that incorporates this pull request fix for a NullPointerException you will encounter during code generation. (The bug exists in Restlet 2.2.1.)

The command I ran to generate the Java classes for all the Graph API entities was as follows (run this from WITHIN the lib/ directory in the extracted Restlet zip/tarball you downloaded). In the command below, do NOT specify “$metadata” in the Generator URI as the tool appends that automatically.

java -cp org.restlet.jar:org.restlet.ext.xml.jar:org.restlet.ext.atom.jar:org.restlet.ext.freemarker.jar:org.restlet.ext.odata.jar:org.freemarker_2.3/org.freemarker.jar org.restlet.ext.odata.Generator https://graph.windows.net/YOUR_TENANT_DOMAIN/ ~/path/to/your/target/output/dir

Pitfalls:

  •  If you run the command and get the error “Content is not allowed in prolog.” (which I did), there may be extra characters prepended to the opening “<edmx:Edmx” in the “$metadata” schema document this endpoint returns. If so, do the following:
    • Download the source XML $metadata document to your hard-drive
    • Open it up in an editor and REMOVE the <?xml version="1.0" encoding="utf-8"?> declaration that precedes the first “<edmx…” element
    • Next, to ensure there are no more hidden characters at the start of the document, open it in a hex editor and remove any other hidden characters that precede the first “<edmx…” element.
    • Save the changes locally, fire up a local webserver (or a webserver that lives anywhere) and set it up so that http://yourwebserver/whatever/$metadata serves that XML file.
    • Then proceed to alter the Restlet Generator command above to reference this modified URI as appropriate. Remember that you do NOT specify “$metadata” in the Generator URI as the tool appends that automatically.
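The manual cleanup steps above can also be scripted. Here is a small hedged Java sketch (the class name and approach are mine, not part of the Restlet tooling) that drops everything preceding the first “<edmx” element — XML prolog, BOM, and any other hidden bytes included:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MetadataCleaner {

    // Drop anything (BOM, stray bytes, the XML prolog) that precedes
    // the first "<edmx" element in the $metadata document.
    static String stripPreamble(String xml) {
        int i = xml.indexOf("<edmx");
        return i >= 0 ? xml.substring(i) : xml;
    }

    public static void main(String[] args) throws Exception {
        Path in = Paths.get(args[0]);   // downloaded $metadata file
        Path out = Paths.get(args[1]);  // cleaned file to host
        String xml = new String(Files.readAllBytes(in), StandardCharsets.UTF_8);
        Files.write(out, stripPreamble(xml).getBytes(StandardCharsets.UTF_8));
    }
}
```

Run it against the downloaded $metadata document, then host the cleaned output on your webserver as described above.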

Logstash: Failed to flush outgoing items UndefinedConversionError workaround

If you have ever seen an error similar to the one below in Logstash, you know it can be frustrating: it blocks, taking your whole pipeline down. There appear to be some outstanding tickets on this, one of which is here. The error can occur when an upstream input whose charset is defined as US-ASCII (such as an ASCII file input) contains some extended characters, and the events are then sent to an output where conversion is needed (in my case it was elasticsearch_http). This was with Logstash 1.4.1.

:message=>"Failed to flush outgoing items", :outgoing_count=>1,
:exception=>#<Encoding::UndefinedConversionError: "\xC2" from ASCII-8BIT to UTF-8>

For anyone else out there, here is a simple workaround I put in my filters (using the Ruby code filter) to pre-process messages from “known” upstream ASCII inputs. This fixed it for me:

ruby {
code => "begin; if !event['message'].nil?; event['message'] = event['message'].force_encoding('ASCII-8BIT').encode('UTF-8', :invalid => :replace, :undef => :replace, :replace => '?'); end; rescue; end;"
}

To create a little log file to reproduce this try:

perl -e 'print "\xc2\xa0\n"' > test.log
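What the Ruby one-liner above effectively does is replace every non-ASCII byte with a “?” so the downstream UTF-8 conversion can never fail. For clarity, here is the same transformation expressed as a standalone Java sketch (illustrative only, not Logstash code):

```java
public class Utf8Sanitizer {

    // Replace every non-ASCII byte with '?', mirroring Ruby's
    // force_encoding('ASCII-8BIT').encode('UTF-8', :undef => :replace, :replace => '?'),
    // where any byte >= 0x80 is "undefined" and gets substituted.
    static String sanitize(byte[] raw) {
        StringBuilder sb = new StringBuilder(raw.length);
        for (byte b : raw) {
            sb.append((b & 0x80) == 0 ? (char) b : '?');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Same bytes as the perl repro above: 0xC2 0xA0 plus a newline.
        // Each of the two high bytes becomes '?'.
        byte[] line = {(byte) 0xC2, (byte) 0xA0, 0x0A};
        System.out.print(sanitize(line));
    }
}
```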

Encrypting Logstash data

Logstash can consume a myriad of data from many different sources and likewise send it to all sorts of different outputs across your network. However, a question that sometimes comes up is: “how sensitive is this data?”

The answer will vary as wildly as your inputs; however, if the data is sensitive in nature you need to secure it at rest and in transit. There are many ways to do this, and they vary with the combination of your inputs/outputs, what protocols are supported, and whether or not they use TLS or some other form of crypto. Your concerns might also be mitigated if you are transferring this data over a secured/trusted network, a VPN, or a destination-specific ssh tunnel.

However if none of the above apply, or you simply don’t have control over the parts of the infrastructure in between your Logstash agent and your final destination, there is one cool little filter plugin available to you in the Logstash Contrib Plugins project called the cipher filter.

If the only thing you have control over is the ability to touch your config file on both ends, you can use this filter, without fiddling with other infrastructure, to provide a modest level of security for your data. This assumes a topology where Logstash agents send data to some queueing system, and the data is consumed by another Logstash system downstream, which subsequently filters it and sends it to its final resting place. The cipher filter comes into play when your “event” field(s) are encrypted at the source agent, flow through the infrastructure, and are decrypted by the Logstash processor on the other end.

The cipher filter has been around for some time, but I recently had a use case similar to the above, so I tried it and ran into some problems. In the absence of any answers, I ended up fixing the issues and adding some new features in this pull request. So if you are reading this and that pull request is not yet approved, you might need to use my fork instead.

IMPORTANT: The security of your “key” is only as secure as the readability of the Logstash config file that contains it! If this config file is world-readable, your encryption is worthless. When using this method, be sure to analyze how, and by whom, this configuration file can be accessed.

Here is a simple Logstash conf example of using the cipher filter for encryption and decryption:

input {

    file {
        # to test, just start writing
        # line after line to the log
        # this will become the 'message'
        # in your logstash event
        path => "/path/to/test.log"
        sincedb_path => "./.sincedb"
        start_position => "end"
    }
}

filter {

    cipher {
        algorithm => "aes-256-cbc"
        cipher_padding => 1

        # Use a static "iv"
        #iv => "1234567890123456"

        # OR use a random IV per encryption
        iv_random_length => 16

        key => "12345678901234567890123456789012"
        key_size => 32
        
        mode => "encrypt"
        source => "message"
        target => "message_crypted"
        base64 => true
        
        # the maximum number of times the
        # internal cipher object instance 
        # should be re-used before creating
        # a new one, default to 1
        #
        # On high volume systems bump this up
        max_cipher_reuse => 1
    }

    cipher {
        algorithm => "aes-256-cbc"
        cipher_padding => 1

        # Use a static "iv"
        #iv => "1234567890123456"

        # OR use a random IV per encryption
        iv_random_length => 16

        key => "12345678901234567890123456789012"
        key_size => 32
        
        mode => "decrypt"
        source => "message_crypted"
        target => "message_decrypted"
        base64 => true
        
        # the maximum number of times the
        # internal cipher object instance 
        # should be re-used before creating
        # a new one, default to 1
        #
        # On high volume systems bump this up
        max_cipher_reuse => 1
    }


}


output {

    # Once output to the console you should see three fields in each event
    # The original "message"
    # The "message_crypted"
    # The "message_decrypted" verifying that the decryption worked. 
    # In a real setup, you would only send along an encrypted version to an OUTPUT
    stdout {
        codec => json
    }

}
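To make the config above more concrete: as I understand the plugin (as of the pull request mentioned above), with iv_random_length set a fresh random IV is generated per event, prepended to the ciphertext, and the whole thing is base64 encoded — which is how the decrypting side recovers the IV. Here is an illustrative Java sketch of that scheme (class and method names are mine, not the plugin’s):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;

public class CipherScheme {
    static final int IV_LEN = 16; // matches iv_random_length => 16

    static String encrypt(byte[] key, String plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);                  // fresh random IV per event
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        byte[] ct = c.doFinal(plaintext.getBytes("UTF-8"));
        byte[] out = new byte[IV_LEN + ct.length];         // IV is prepended to the ciphertext
        System.arraycopy(iv, 0, out, 0, IV_LEN);
        System.arraycopy(ct, 0, out, IV_LEN, ct.length);
        return Base64.getEncoder().encodeToString(out);    // then base64 encoded
    }

    static String decrypt(byte[] key, String b64) throws Exception {
        byte[] all = Base64.getDecoder().decode(b64);
        byte[] iv = Arrays.copyOfRange(all, 0, IV_LEN);    // peel the IV back off
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return new String(c.doFinal(Arrays.copyOfRange(all, IV_LEN, all.length)), "UTF-8");
    }
}
```

The sketch uses a caller-supplied 32-byte key just like the demo key in the config above; in practice, generate a real key and protect the config file that holds it.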


Liferay clustering internals

Ever wondered how Liferay’s internal clustering works? I had to dig into it in the context of my other article on globally load balancing Liferay across separate data centers. This blog post merely serves as a place for my research notes, and might be helpful for someone else trying to follow what is going on under the covers in Liferay, in lieu of any real documentation, design documents, or code comments (of which Liferay seems to have none).

Current set of mostly unanswered questions at the Liferay community forums: https://www.liferay.com/community/forums/-/message_boards/recent-posts?_19_groupThreadsUserId=17865971

Liferay Cluster events, message types, join/remove

  • Anytime ClusterExecutorImpl.memberJoined()/memberRemoved() is invoked, all implementations of ClusterEventListener have processClusterEvent() invoked
    • MemorySchedulerClusterEventListener: calls ClusterSchedulerEngine.getMasterAddressString() which can trigger slaveToMaster or masterToSlave @see EXECUTE notes below
    • LuceneHelperImpl.LoadIndexClusterEventListener: @see NOTIFY/JOIN section below
    • LiveUsersClusterEventListenerImpl: @see DEPART/JOIN section below
    • DebuggingClusterEventListenerImpl: logs the event to the log
    • ClusterMasterTokenClusterEventListener: on any event, invokes getMasterAddressString() to determine who the master is, or becomes the master itself if nobody is (controlled via an entry in the Lock_ table). After this runs, it invokes notifyMasterTokenTransitionListeners(), which calls ClusterMasterTokenTransitionListener.masterTokenAcquired() or masterTokenReleased()
      • Implementors of ClusterMasterTokenTransitionListener:
        • BackgroundTaskClusterMasterTokenTransitionListener: calls BackgroundTaskLocalServiceUtil.cleanUpBackgroundTasks(), which in turn ends up invoking BackgroundTaskLocalServiceImpl -> cleanUpBackgroundTasks(), which is annotated with @Clusterable(onMaster=true). This gets all tasks with STATUS_IN_PROGRESS and changes their status to FAILED, then gets the BackgroundTaskExecutor LOCK and UNLOCKs it if this server is the current owner/master for that lock
    • JGroups
      • Member “removed”: triggers ClusterRequestReceiver.viewAccepted() that in turn invokes ClusterExecutorImpl.memberRemoved() which fires ClusterEventType.DEPART (see below)
    • ClusterMessageType.NOTIFY/UPDATE: These message types are requested from other nodes when a new node comes up. When received, they are in turn translated and re-broadcast locally within an individual node as a ClusterEvent of type JOIN via the ClusterExecutorImpl.memberJoined() method. Note this latter class is different in the EE jar; the implementation of this method additionally invokes the EE LicenseManager.checkClusterLicense() method
      • “Live Users” message destination:
        • triggers actions done by LiveUsersMessageListener
        • invokes cluster RPC invocation of LiveUsers.getLocalClusterUsers()
        • response is processed by LiveUsers.addClusterNode
        • proceeds to “sign in” each user that is signed in on the other joined node, appears to do session tracking etc
      • LoadIndexClusterEventListener:
        • Loads a full stream of the Lucene index per company id from any responding node whose local Lucene index last-generation value is > this node’s. This is only triggered on a JOIN event when the “control” JChannel’s total number of peer nodes minus one is <= 1, so that it only runs on a node boot and not at other times.
      • EE LicenseManager.checkClusterLicense()

        • Please see the notes below on licensing and verification in regards to clustering on bootup
    • ClusterEventType.DEPART: This event is broadcast locally on a node when it is shutdown
      • LiveUsersClusterEventListenerImpl: listens to local DEPART event and sends message over MessageBus to other cluster members letting them know this node is gone
    • ClusterMessageType.EXECUTE: These cluster messages contain serialized ClusterRequest objects, which carry RPC-like information on a class, parameters, and a method to invoke on other node(s). Generated by the ClusterRequest.createMulticastRequest() and createUnicastRequest() methods. The results are typically passed off to the ClusterExecutorUtil.execute() methods (which end up calling ClusterExecutorImpl.execute())
    • ClusterRequest.createUnicastRequest() invokers:
      • LuceneHelperImpl: _getBootupClusterNodeObjectValuePair() called by getLoadIndexesInputStreamFromCluster() when it needs to connect to another node, only in order to load indexes on bootup from another node whose index is up to date
      • ClusterSchedulerEngine: callMaster() called by getScheduledJob(s)(), initialize() and masterToSlave(), only if MEMORY_CLUSTERED is enabled for the job engine. initialize() calls initMemoryClusteredJobs() if the local node is a slave, to get a list of all MEMORY_CLUSTERED jobs. MEMORY_CLUSTERED jobs only run on whoever the master is, which is designated by which node owns the lock named “com.liferay.portal.scheduler.SchedulerEngine” in the Lock_ table in the database. masterToSlave() demotes the current node to a slave; slaveToMaster() gives the current node the opportunity to acquire the lock above and become the scheduler master node
      • ClusterMasterExecutorImpl: used for executing EXECUTE requests against the master only. Leverages getMasterAddressString() to determine who the master is (via which node owns the Lock_ table entry named “ClusterMasterExecutorImpl”).
    • ClusterRequest.createMulticastRequest() invokers:
      • JarUtil: Instructs other peer nodes to JarUtil.installJar(). Invoked by DataSourceFactoryUtil.initDataSource. Also by DataSourceSwapper.swap*DataSource(). SetupWizardUtil.updateSetup()
      • ClusterLoadingSyncJob: Invoked by EditServerAction.reindex(). This sends a multicast ClusterRequest of EXECUTE for LuceneClusterUtil.loadIndexesFromCluster(nodeThatWasJustReindexedAddress) to all nodes. So all nodes, even those on the other side of the RELAY2 bridge, will get this, and will then attempt to connect over the bridge back to the node that was just re-indexed to stream the “latest” index from it. Note, however, that nodes across the bridge will output the stack trace below in their logs when attempting to connect back to get a token on the re-indexed node (this routine is in _getBootupClusterNodeObjectValuePair() in LuceneHelperImpl and tries to do a unicast ClusterRequest to invoke TransientTokenUtil.createToken()). My guess is that this error has something to do with the “source address”, a JGroups local UUID obtained from ClusterExecutorImpl.getLocalClusterNodeAddress(), being sent to the remote bridge; on receipt, the node in the other data center does not recognize that address (see also http://www.jgroups.org/manual/html/user-advanced.html#d0e3353). ClusterRequestReceiver *does* get invoked with a properly scoped SiteUUID in the format hostname:sitename; however, ClusterRequestReceiver.processClusterRequest() defers to ClusterRequest.getMethodHandler().invoke(), which has no knowledge of the originating SiteUUID, only the internal “local” JGroups UUID address that is embedded as an “argument” to loadIndexesFromCluster() in the ClusterRequest. Hence it can’t properly call TransientTokenUtil.createToken() on the sender: when it attempts the call via ClusterExecutorImpl (the JGroups control channel’s send()) using the “address” argument it received in the original inbound payload, the message is dropped because the target address is invalid/unknown (the errors below).
        • 21:10:30,485 WARN [TransferQueueBundler,LIFERAY-CONTROL-CHANNEL,MyHost-34776][UDP:1380] MyHost-34776: no physical address for 27270628-816b-ddba-872f-1f2d27327f2f, dropping message
          
        • 18:37:40,169 INFO [Incoming-1,shared=liferay-control][LuceneClusterUtil:47] Start loading Lucene index files from cluster node 9c4870e7-17c2-18fb-01c4-5817bb5d1290
          18:37:46,538 WARN [TransferQueueBundler,shared=liferay-control][TCP:1380] null: no physical address for 9c4870e7-17c2-18fb-01c4-5817bb5d1290, dropping message
        • 18:01:56,302 ERROR [Incoming-1,LIFERAY-CONTROL-CHANNEL,mm2][ClusterRequestReceiver:243] Unable to invoke method {arguments=[[J@431bd9ee, 5e794a8e-0b7f-c10d-be17-47ce43b66517], methodKey=com.liferay.portal.search.lucene.cluster.LuceneClusterUtil.loadIndexesFromCluster([J,com.liferay.portal.kernel.cluster.Address)}
          java.lang.reflect.InvocationTargetException
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
          at java.lang.reflect.Method.invoke(Method.java:606)
          at com.liferay.portal.kernel.util.MethodHandler.invoke(MethodHandler.java:61)
          at com.liferay.portal.cluster.ClusterRequestReceiver.processClusterRequest(ClusterRequestReceiver.java:238)
          at com.liferay.portal.cluster.ClusterRequestReceiver.receive(ClusterRequestReceiver.java:88)
          at org.jgroups.JChannel.invokeCallback(JChannel.java:749)
          at org.jgroups.JChannel.up(JChannel.java:710)
          at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1025)
          at org.jgroups.protocols.relay.RELAY2.deliver(RELAY2.java:495)
          at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:330)
          at org.jgroups.protocols.FORWARD_TO_COORD.up(FORWARD_TO_COORD.java:153)
          at org.jgroups.protocols.RSVP.up(RSVP.java:188)
          at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
          at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
          at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
          at org.jgroups.protocols.UNICAST2.up(UNICAST2.java:453)
          at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:763)
          at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:574)
          at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
          at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:187)
          at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
          at org.jgroups.protocols.MERGE3.up(MERGE3.java:290)
          at org.jgroups.protocols.Discovery.up(Discovery.java:359)
          at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
          at org.jgroups.protocols.TP$4.run(TP.java:1181)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:744)
          Caused by: com.liferay.portal.kernel.exception.SystemException: java.lang.NullPointerException
          at com.liferay.portal.search.lucene.LuceneHelperImpl._getBootupClusterNodeObjectValuePair(LuceneHelperImpl.java:830)
          at com.liferay.portal.search.lucene.LuceneHelperImpl.getLoadIndexesInputStreamFromCluster(LuceneHelperImpl.java:451)
          at com.liferay.portal.search.lucene.LuceneHelperUtil.getLoadIndexesInputStreamFromCluster(LuceneHelperUtil.java:326)
          at com.liferay.portal.search.lucene.cluster.LuceneClusterUtil.loadIndexesFromCluster(LuceneClusterUtil.java:57)
          ... 32 more
          Caused by: java.lang.NullPointerException
          at com.liferay.portal.search.lucene.LuceneHelperImpl._getBootupClusterNodeObjectValuePair(LuceneHelperImpl.java:809)
          ... 35 more
      • PortalImpl: resetCDNHosts() does a ClusterRequest.EXECUTE for the same PortalUtil.resetCDNHosts() method on all peer nodes. To avoid a recursive loop, it uses a ThreadLocal when entering the EXECUTE block (relevant for remote nodes)
      • LuceneHelperImpl: _loadIndexFromCluster() (invoked via the JOIN event above). Invokes EXECUTE remote method calls of LuceneHelperUtil.getLastGeneration() against all peers to see whose Lucene index generation time is newer than this node’s; it picks that one and then initiates a direct HTTP stream connection to fetch the entire index from it. Who is a “peer” candidate is determined by invoking ClusterExecutorUtil.getClusterNodeAddresses(), which is ultimately realized by calling the “control” channel’s getView() method
      • MessageSenderJob: notifyClusterMember() invoked by MessageSenderJob.doExecute() when a job execution context’s next execution time is null. The notifyClusterMember() method invokes SchedulerEngineHelperUtil.delete() via a ClusterRequest.EXECUTE
      • ClusterableAdvice: AOP advice class that intercepts any Liferay method annotated with @Clusterable; AFTER returning, it creates a ClusterRequest.EXECUTE for the same method, passed to all nodes in the cluster. Used by ClusterSchedulerEngine (delete, pause, resume, schedule, suppressError, unschedule, update), BackgroundTaskLocalServiceImpl, and PortletLocalServiceImpl. Has an ‘onMaster’ property that awkwardly controls whether the advice is actually applied. If onMaster=false, the advice fast-returns like a no-op; if onMaster=true, the advice runs (locally if THIS node is the master, or via a ClusterRequest.EXECUTE otherwise). WHO the master is is dictated by which node owns the lock in the database named “com.liferay.portal.scheduler.SchedulerEngine”.
      • EhcacheStreamBootstrapCacheLoader: start() and doLoad() invokes EhcacheStreamBootstrapHelpUtil -> loadCachesFromCluster() broadcasts a ClusterRequest for EXECUTE of the method createServerSocketFromCluster() on all peer nodes, and then attempts to connect to a peer node to load all named caches from the peer node…
  • ClusterExecutorImpl notes:
    • This class has several methods relating to figuring out “who” is in the “cluster” and due to the implementation of these methods you will get inconsistent “views” of who are the local cluster members (if using RELAY2 in JGroups)
      • getClusterNodeAddresses() = consults the “control” JChannel’s receiver’s “view” of the cluster. This will only return local nodes subscribed to this JChannel. These are fetched real-time every time its invoked.
      • getClusterNodes() = consults a local map called “_liveInstances”. Live instances is populated when memberJoined() is invoked, in response to ClusterRequestReceiver.processClusterRequest() being invoked when a new node comes up. Because this happens in response to ClusterRequests, which WILL be transmitted over RELAY2 in JGroups, this will reflect MORE nodes than what getClusterNodeAddresses() returns
      • There is also another local member called “_clusterNodeAddresses”, which is likewise populated with addresses of other nodes when memberJoined() is invoked. I really don’t understand why this exists alongside getClusterNodeAddresses() AND getClusterNodes(), when they can potentially return inconsistent data.
  • Quartz/Jobs scheduling
    •  ClusterSchedulerEngine proxies all requests for scheduling jobs through SchedulerEngineProxyBean (BaseProxyBean), which just relays schedule() invocations through Messaging, ending up invoking QuartzSchedulerEngine.schedule()
    • ClusterSchedulerEngine on initialization calls getMasterAddressString(), which gets the lock named “com.liferay.portal.scheduler.SchedulerEngine” from the Lock_ table via LockLocalServiceImpl.lock(). This is row-based locking in the database; the “owner” appears to be based on the local IP
    • _doMasterToSlave() – queries the current master for a list of all MEMORY_CLUSTERED jobs (does this over RMI or a JGroups method invocation of getScheduledJobs()), gets this list of job names and unschedules them locally
    • slaveToMaster() – takes everything in _memoryClusteredJobs and schedules it locally. _memoryClusteredJobs seems to just be a global list of the MEMORY_CLUSTERED jobs that are kept track of on each node…
  • Licensing: If running Liferay EE, the LicenseManager.checkClusterLicense() -> checkBinaryLicense() chain is invoked via ClusterExecutorImpl.memberJoined() (note this only happens in the EE version, as the class differs in the portal-impl.jar contained in the EE distribution). This invocation chain responds to the JOIN event for the peer node by checking whether the “total nodes” the node knows about is greater than the total number of nodes the license permits; if it is, the local node gets a System.exit() invoked on it (killing itself). If you are interested in learning more about this obfuscated process, you can investigate further on your own; it is all in the EE portal-impl.jar. Here are some other notes on the licensing, how it relates to clustering, and issues one might encounter.
    • LicenseManager. getClusterLicense() invokes ClusterExecutorUtil.getClusterNodes() and does validation to check if you have more nodes in the cluster than your installed license supports (amongst other things).
    • Another thing that happens when ClusterExecutorImpl.memberJoined() is invoked is that ClusterExecutorImpl.getClusterNodes() appears to be invoked (which, per the notes above, will return nodes across a JGroups RELAY2 bridge in addition to local peer nodes in the same DC). It picks one of these peer nodes and remotely invokes LicenseManager.getLicenseProperties() against it via a ClusterExecutorUtil.execute(). The return is a Future with a timeout; upon reception, the Future holds a Map containing the license properties, which are compared against the local node’s license. If they match, we are good to go; otherwise, read below (a “mixing licenses” error will occur). Why is this relevant? Despite having legitimate licenses, when using RELAY2 you can come across two erroneous errors that will prevent your cluster from booting, as described below.
      • The first error you might see is a timeout error, when it reaches out to a peer node to verify a license in response to a new member joining the cluster (likely a member over a RELAY2 bridge). It will look like the below. Typically just rebooting Liferay takes care of the issue, as explained in the followup bullets:
        • java.util.concurrent.TimeoutException
          at com.liferay.portal.kernel.cluster.FutureClusterResponses.get(FutureClusterResponses.java:88)
          at com.liferay.portal.license.a.a.f(Unknown Source)
          at com.liferay.portal.license.a.a.b(Unknown Source)
          at com.liferay.portal.license.a.f.d(Unknown Source)
          at com.liferay.portal.license.a.f.d(Unknown Source)
          at com.liferay.portal.license.LicenseManager.c(Unknown Source)
          at com.liferay.portal.license.LicenseManager.checkBinaryLicense(Unknown Source)
          at com.liferay.portal.license.LicenseManager.checkClusterLicense(Unknown Source)
          at com.liferay.portal.cluster.ClusterExecutorImpl.memberJoined(ClusterExecutorImpl.java:442)
          at com.liferay.portal.cluster.ClusterRequestReceiver.processClusterRequest(ClusterRequestReceiver.java:219)
          at com.liferay.portal.cluster.ClusterRequestReceiver.receive(ClusterRequestReceiver.java:88)
          at org.jgroups.JChannel.invokeCallback(JChannel.java:749)
          at org.jgroups.JChannel.up(JChannel.java:710)
          at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1025)
          at org.jgroups.protocols.relay.RELAY2.up(RELAY2.java:338)
          at org.jgroups.protocols.FORWARD_TO_COORD.up(FORWARD_TO_COORD.java:153)
          at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:178)
          at org.jgroups.protocols.FRAG2.up(FRAG2.java:181)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:400)
          at org.jgroups.protocols.FlowControl.up(FlowControl.java:418)
          at org.jgroups.protocols.pbcast.GMS.up(GMS.java:896)
          at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:245)
          at org.jgroups.protocols.UNICAST.up(UNICAST.java:414)
          at org.jgroups.protocols.pbcast.NAKACK2.handleMessage(NAKACK2.java:763)
          at org.jgroups.protocols.pbcast.NAKACK2.up(NAKACK2.java:574)
          at org.jgroups.protocols.BARRIER.up(BARRIER.java:126)
          at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:147)
          at org.jgroups.protocols.FD.up(FD.java:253)
          at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288)
          at org.jgroups.protocols.MERGE2.up(MERGE2.java:205)
          at org.jgroups.protocols.Discovery.up(Discovery.java:359)
          at org.jgroups.protocols.TP$ProtocolAdapter.up(TP.java:2610)
          at org.jgroups.protocols.TP.passMessageUp(TP.java:1263)
          at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1825)
          at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1793)
          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
          at java.lang.Thread.run(Thread.java:745)
    • When booting up several Liferay EE nodes (configured for clustering) simultaneously it appears you can run into a situation where all the nodes fail to boot and say they have “invalid licenses” (despite the fact you have valid cluster licenses installed under data/license)…and then your container processes are automatically shut down… nice
    • This appears to be due to the verification process LicenseManager does on bootup, where it receives license information from peer nodes. If all nodes are started at the same time there appears to be a timing condition: none of the nodes are yet in a “ready” state, so when peers connect to them for their license properties, they respond with nothing. That empty response is then compared against the local node’s license, and since they don’t equals(), the node decides the local license (despite being legit) is invalid.
    • Here is example output from NODE1 (booted up at same time as NODE2, note the blank remote license):
        • 13:55:20,381 ERROR [Incoming-5,shared=liferay-control][a:?] Remote license does not match local license. Local license: {productEntryName=Liferay Portal, startDate=11, expirationDate=22, description=Liferay Portal, owner=Liferay Portal, maxServers=4, licenseEntryName=Portal6 Non-Production (Developer Cluster), productVersion=6.1, type=developer-cluster, accountEntryName=Liferay Trial, version=2}
          Remote node: {clusterNodeId=11111111111, inetAddress=/192.168.0.23, port=-1}
          Remote license: {}
          Mixing licenses is not allowed. Local server is shutting down.
      • Here is example output from NODE2 (booted up at same time as NODE1):
        • 13:55:20,388 ERROR [Incoming-1,shared=liferay-control][a:?] Remote license does not match local license. Local license: {productEntryName=Liferay Portal, startDate=11, expirationDate=22, description=Liferay Portal, owner=Liferay Portal, maxServers=4, licenseEntryName=Portal6 Non-Production (Developer Cluster), productVersion=6.1, type=developer-cluster, accountEntryName=Liferay Trial, version=2}
          Remote node: {clusterNodeId=222222222222, inetAddress=/192.168.0.22, port=-1}
          Remote license: {}
          Mixing licenses is not allowed. Local server is shutting down.
      • How to solve this? The way I did it was to always boot one node first, with NO peers… wait for it to complete booting before bringing up all other nodes, then these errors seem to go away.

 

 JGroups RELAY2 notes

Liferay does NOT use RpcDispatcher but has its own “rpc” abstraction via “ClusterRequest”, which is Serializable (contains target addresses, a MethodHandler, a UUID, and a type of EXECUTE). MethodHandler is just a serializable bag of arguments plus a MethodKey, which stores the target class, methodName and paramTypes. The receiving side uses these deserialized parts to invoke the method locally via reflection.

On the RECEIVING side is ClusterRequestReceiver, which is the JGroups receiver that processes all inbound messages from peers. It takes the ClusterRequest, gets the MethodHandler (if the type is EXECUTE) and invokes it, then returns the response as a ClusterNodeResponse.
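To make the pattern concrete, here is a minimal, hypothetical sketch of this "serialize a method key, invoke by reflection on the receiving side" idea. None of this is Liferay's actual code; the class names merely mirror the concepts described above, and this toy version only handles static methods (Liferay resolves real target objects):

```java
import java.io.Serializable;
import java.lang.reflect.Method;

// Illustrative only -- names mirror the concepts described above, not Liferay's API.
class MethodKey implements Serializable {
    final Class<?> targetClass;
    final String methodName;
    final Class<?>[] paramTypes;

    MethodKey(Class<?> targetClass, String methodName, Class<?>... paramTypes) {
        this.targetClass = targetClass;
        this.methodName = methodName;
        this.paramTypes = paramTypes;
    }
}

class MethodHandler implements Serializable {
    final MethodKey key;
    final Object[] args;

    MethodHandler(MethodKey key, Object... args) {
        this.key = key;
        this.args = args;
    }

    // On the receiving side: resolve the method from the key and invoke it
    // reflectively with the deserialized arguments. (Static methods only in
    // this sketch; the real thing resolves an actual target instance.)
    Object invoke() {
        try {
            Method m = key.targetClass.getMethod(key.methodName, key.paramTypes);
            return m.invoke(null, args);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

public class MethodHandlerDemo {
    public static void main(String[] args) {
        // pretend this MethodHandler arrived serialized from a peer node
        MethodKey key = new MethodKey(Integer.class, "parseInt", String.class);
        MethodHandler handler = new MethodHandler(key, "42");
        System.out.println(handler.invoke()); // 42
    }
}
```

The key takeaway is that because the payload is just a serialized object (not a JGroups RPC call), it survives being relayed between clusters, which matters later when we discuss RELAY2.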

 

 

Clustering Liferay globally across data centers (GSLB) with JGroups and RELAY2

Recently I’ve been looking into options to solve the problem of GSLB’ing (global server load balancing) a Liferay Portal instance.

This article is a work in progress… and a long one. Jan Eerdekens states it correctly in his article, “Configuring a Liferay cluster is part experience and part black magic”…. doing it across data-centers, however, is like wielding black magic across N black holes….

Footnotes for this article are here: http://bitsofinfo.wordpress.com/2014/05/21/liferay-clustering-internals/

The objective is a typical one.

  • You have a large Liferay portal with users accessing the portal from lots of different locations across the world
  • The Liferay “cluster” is hosted in a single data-center in a particular region
  • The users outside of that region complain that using Liferay is slow for them
  • Your goal: extend the Liferay cluster across multiple regions while keeping all the functionality intact and boosting response times for users across the world by globally load balancing (GSLB) the application, so users in Asia hit a local data-center (DC) while users in South America hit a DC closer to them in their respective region
  • Solution: Good luck finding publicly available documentation on this for Liferay

Hopefully this article will help others out there, point them in a new direction and give them some ideas on how to put something like this together.

I’d like to note that this is not necessarily the ideal way to do this…. it is just one way you can do it “out of the box” without extending Liferay in other ways. (I say this because there are many optimizations one could do to slim down what Liferay sends around a cluster by default; i.e. lots of heavier object serializations as events happen.) I’d also like to note that I am not a Liferay expert and I’m sure some things I am describing here are not 100% accurate. What is described below is the result of a lot of analysis of the uncommented, undocumented Liferay source code. I’m sure the developers at Liferay could provide additional insight, corrections and clarifications to what is stated below, but in lieu of design documentation, code comments and the like this is all that we in the community have to go on.

For my use case I tested this with two data-centers. One home-grown bare-metal DC in “regionA” and the other in AWS “regionB”. Point being we have two DC’s that are geographically separated over a WAN; could be dedicated line, VPN tunnel; whatever; point being the bandwidth available between DC’s is nothing compared to within DC.

 

Little bit of background:

Liferay has some light documentation for clustering, however this documentation is focused on a Liferay cluster within a single DC. I say “light documentation” because that is the nicest way I can state it. The Liferay project in general is largely devoid of any real documentation when it comes to how things work internally “under the hood” in the Liferay core codebase (design documents, code comments etc). If you want to know how things work, you have to crawl through tons of un-commented source code and figure it out for yourself.

First off, Liferay uses JGroups under the hood (specifically JGroups version 3.2.10 in Liferay 6.2 as of this post). JGroups is pretty much the de-facto “go-to” for building clusters in Java applications and has been around a very long time; many Java stacks use it. If you want to see a good example of an open-source project with good design documents that explain the internals, see JGroups (hint, hint Liferay guys). I’m not going to go much further into describing JGroups; you can do that on your own, as I’ll be using some JGroups terminology below.

Liferay & JGroups basics:

Liferay defines two primary JGroups channels for what Liferay calls “cluster link”.  You enable this in your portal-ext.properties by setting cluster.link.enabled=true. By default all channels in Liferay are UDP (multicast); if you are trying to cluster Liferay in a DC that does not support multicast (like AWS) you will want to configure it to use unicast (see this article)

  • Channel “control” portal-ext.properties entry = cluster.link.channel.properties.control 
  • Channel “transport” portal-ext.properties entry = cluster.link.channel.properties.transport.0(?N)

Both of these channels are used for various things such as notifying other cluster members of a given node’s state (i.e. notifying they are up/down etc) and sending ClusterRequests to other nodes for invoking operations across the cluster. There does not seem to be any consistency as to why one is used over the other. For example, streaming a Lucene index over from another node uses the control channel, while reindex() requests use the transport channel.

Out of the box, if you bring up more than one node in a local DC (configured w/ UDP multicast), the Liferay nodes will automatically discover each other, peer up into a cluster and begin to send data to each other when appropriate. For unicast, again you have to make some changes to your portal-ext.properties, but effectively the result is the same.

Great! Now what? We have one DC with N local nodes that are all peered up with each other….. but how can this DC exchange state with DC2? Good question. When attempting to GSLB an application there are many considerations; for Liferay specifically, the big ones that I noted need to be addressed are below. Note there are some more hidden in Liferay’s internals, but for the big picture let’s just focus on these :)

Cluster “master” determination:

Liferay has the concept of a logical “master”; who is the “master” is determined by ownership of a named lock called “com.liferay.portal.cluster.ClusterMasterExecutorImpl” that resides in the “Lock_” table in the database. @see ClusterMasterTokenClusterEventListener. Note that the database is shared by all nodes across all DC’s (see database section below), and this presents a huge problem if clusters in separate DC’s (which by default, w/ JGroups, are only aware of local peer nodes) can’t talk to each other across DC’s; which is what this article is about. i.e. node1 in DC1 might acquire this Lock_ first, but the nodes in DC2 cannot communicate w/ the “master” because it is in an unreachable DC.
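The election mechanic itself is simple: first node to insert the named lock row wins. Here is an illustrative sketch (not Liferay code) of that idea, with the shared Lock_ table simulated by an in-memory map; in Liferay the row lives in the shared database, so every node in every DC competes for the same lock:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative only: simulates "first node to grab the named lock becomes
// master". In Liferay the lock is a row in the shared Lock_ table.
public class NamedLockElection {
    static final String MASTER_LOCK = "com.liferay.portal.cluster.ClusterMasterExecutorImpl";

    // key = lock name, value = owning node id (stand-in for the Lock_ table)
    static final ConcurrentMap<String, String> lockTable = new ConcurrentHashMap<>();

    // returns true if this node acquired (or already owns) the master lock
    static boolean tryBecomeMaster(String nodeId) {
        String owner = lockTable.putIfAbsent(MASTER_LOCK, nodeId);
        return owner == null || owner.equals(nodeId);
    }

    public static void main(String[] args) {
        System.out.println(tryBecomeMaster("node1-dc1")); // true, first to acquire
        System.out.println(tryBecomeMaster("node3-dc2")); // false, node1-dc1 holds it
    }
}
```

The problem described above falls out directly: node3-dc2 can *see* who the master is (the database is shared) but, without a cross-DC bridge, it cannot *talk* to it over JGroups.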

Database:

This is an enormous topic on its own, but for this article lets keep it dumb simple. Leverage Liferay’s reader-writer database configuration. For example in DC1 setup your master instance for your database and designate as your “write” database, then configure two read-slave instances; one slave in DC1 and another in DC2. In your portal-ext.properties files for nodes in both DC’s configure “jdbc.write.url” to hit the master instance in DC1 and “jdbc.read.url” to hit whichever read-slave instance is local within each DC.
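As a concrete illustration of that reader-writer split, here is a sketch of what the relevant portal-ext.properties entries might look like on a DC2 node. The hostnames and driver are placeholders, and note that Liferay's reader-writer datasource also requires enabling its dynamic data source spring configuration (see the comments in the stock portal.properties for the exact property set):

```properties
# Illustrative fragment only -- hostnames are placeholders.
# Identical on every node except jdbc.read.url, which points at the DC-local slave.

# all writes go to the single master instance in DC1
jdbc.write.driverClassName=com.mysql.jdbc.Driver
jdbc.write.url=jdbc:mysql://db-master.dc1.example.com/lportal?useUnicode=true&characterEncoding=UTF-8

# reads hit the read-slave local to this DC (this example is a DC2 node)
jdbc.read.driverClassName=com.mysql.jdbc.Driver
jdbc.read.url=jdbc:mysql://db-slave.dc2.example.com/lportal?useUnicode=true&characterEncoding=UTF-8
```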

Cached data:

Liferay leverages Ehcache for caching within the application. Clustering for Ehcache can be enabled as described in this article. The configuration relies on a separate JGroups channel within the Liferay cluster, and you need to properly configure it for unicast if your DC does not support multicast, just like previously described.

Fire up two separate DC’s pointing to a database setup as previously described, change some data in the app via a node in DC1, and because it is likely cached, when you view that same page/data in DC2 you won’t see the change in the UI. Why? Because the clustering is only local to each DC. When model objects are updated in Liferay they are saved in the database, and then an event occurs (distributed via JGroups) that tells peer nodes to dump the cache entry…. but only locally to that DC. So you say, “well why not just enable unicast for all nodes in every DC so they are aware of all other nodes in all other DCs?” You could, but imagine the cross-talk; no thanks. There are a few solutions to this: one will be described below (via RELAY2) and another could be provided by the ehcache-jms-wan-replicator project.

Indexed data:

Liferay’s default search index is implemented using Lucene. There are several ways to configure Lucene for a Liferay cluster... but for this article let’s keep it simple, avoid the complexity of having to GSLB Solr as well… and just enable “lucene.replicate.writes=true”. So each peer node within a local DC has its own Lucene index copy, and once again Liferay leverages JGroups (via ClusterExecutorImpl), triggered by all sorts of fancy AOP (see IndexableAdvice.java, the @Indexable annotation, IndexWriterProxyBean + review search-spring.xml in the codebase), to essentially intercept an index write and broadcast a “reindex this thing” message to peer nodes in the local DC cluster. Note that Liferay sends the entire object w/ data to be reindexed to all peer nodes over the JGroups channel (which is not necessarily efficient over a WAN). Secondly, when you go to Server Administration in the GUI and hit the “reindex all index data” button, the server you invoke this on also invokes that operation against all peer nodes. Lastly, another hidden thing is that peer nodes will suck over the entire Lucene index via an HTTP call on startup from a donor node….. again we’ll touch on this later and the considerations to think about.

Again, fire up two separate DC’s pointing to a database/caching setup as previously described. Go to control panel and add a new user in DC1. Great, you see the new user in the user’s screen when accessing through a DC1 node. Go view the users from DC2’s perspective. You won’t see the user, nor can you find them in a search, despite them being in the database. Why? Two things: first, that Lucene “reindex this thing” message did not make it to DC2; and second (unless at this point you have either RELAY2 setup OR ehcache-jms-wan-replicator configured), these screens also rely on what is in Ehcache combined with what is in the local Lucene index.

Document library files:

This is most definitely a consideration for GSLB’ing Liferay across DC’s, however it really does not have anything to do w/ JGroups and RELAY2 in particular so I’m not going to discuss it here. I’ll point you in a direction… consider putting Liferay’s files in S3 and abstracting that with something like YAS3FS which is quite promising for building a distributed file store with local node CDN read performance. Much faster than Liferay’s S3 implementation and globally aware of changes to the FS.

Job scheduling:

Liferay has two classes of “jobs” (implemented via Quartz): “master” jobs, which run only on ONE node (the “master” node), and “slave” jobs, which can run on any node. Again, who is the “master” job runner is determined by an entry in the Lock_ table named “com.liferay.portal.kernel.scheduler.SchedulerEngine”; whichever server boots up and acquires it first gets the lock. The same slave-to-master communication problem noted above exists here in separated DC-to-DC environments where the nodes in separate DC’s cannot talk to one another over a WAN. Liferay has a few job types denoted by com.liferay.portal.kernel.scheduler.StorageType:

  • MEMORY: run on any node and transient in memory
  • MEMORY_CLUSTERED: run only on the “master” node but transient in memory
  • PERSISTED: runs on any node but state persisted in database

Point being, the job engine in Liferay depends on being able to communicate with all other nodes.
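The three StorageTypes above imply a simple dispatch rule per node. The sketch below is illustrative only (not Liferay's scheduler code), but it captures why MEMORY_CLUSTERED jobs break when a DC's nodes cannot agree on, or reach, the master:

```java
// Illustrative sketch of the per-node dispatch decision implied by the
// StorageTypes described above. Not Liferay's actual scheduler code.
enum StorageType { MEMORY, MEMORY_CLUSTERED, PERSISTED }

public class JobDispatch {
    // MEMORY_CLUSTERED jobs must only fire on the node holding the master
    // lock; MEMORY and PERSISTED jobs may fire on any node.
    static boolean shouldRunLocally(StorageType type, boolean isMasterNode) {
        return type != StorageType.MEMORY_CLUSTERED || isMasterNode;
    }

    public static void main(String[] args) {
        System.out.println(shouldRunLocally(StorageType.MEMORY, false));           // true
        System.out.println(shouldRunLocally(StorageType.MEMORY_CLUSTERED, false)); // false
        System.out.println(shouldRunLocally(StorageType.MEMORY_CLUSTERED, true));  // true
    }
}
```

If every node in DC2 believes (or discovers) it is not the master, and the real master in DC1 is unreachable, MEMORY_CLUSTERED jobs simply never run from DC2's point of view.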

“Live Users”:

If you have the “live.users.enabled=true” option set in your portal-ext.properties you can do user monitoring in the cluster, and this too, if clustering is enabled, needs to see all nodes in the cluster. When a cluster node comes up it sends a ClusterMessageType.NOTIFY/UPDATE to all nodes, which in turn broadcast a local ClusterEvent.JOIN, which is reacted to by sending a unicast ClusterRequest.EXECUTE for LiveUsers.getLocalClusterUsers() on the remote node, to effectively sign in that node’s users locally on the requesting node. This appears to be there so that each node will locally reflect all logged-in users across the cluster. Again, this will be incomplete if node1 in DC1 cannot talk to node2 in DC2.

Other:

There are a few other functions in Liferay that appear to leverage the underlying JGroups channels (i.e. EE Licensing, JarUtil, DataSourceFactoryUtil, SetupWizard, PortalManagerUtil.manage() (related to JMX?), PortalImpl.resetCDNHosts() and EhcacheStreamBootstrapCacheLoader). You can see my other notes article for these tidbits.

Debugging/Logging:

It might be helpful to diagnose what’s going on by tweaking the logging settings for Liferay. To do this, read this article on how to permanently change your log settings (which will dump more info on bootup, which is important). Don’t rely on changing your logging settings via the GUI screen, as those are not permanently saved and are transient. Below is an example webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml file I used to triage various issues on bootup related to clustering.


<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <category name="com.liferay.portal.cluster">
        <priority value="TRACE" />
    </category>

    <category name="com.liferay.portal.license">
        <priority value="TRACE" />
    </category>

    <category name="org.jgroups">
        <priority value="DEBUG" />
    </category>

    <category name="com.liferay.portal.kernel.cluster">
        <priority value="TRACE" />
    </category>

</log4j:configuration>

 

How JGroups RELAY2 can bridge the gap.

So at this point it should be clear to anyone reading that we need a way to get these separated clusters in DC1 and DC2 to talk to one another. First off, we could just change the control/transport channels in Liferay to force the use of TCP unicast and specifically list all nodes globally across DC1 and DC2. This would let every node know about every other node globally; however this won’t scale well, as each node would talk to every other node. The other option is the RELAY2 functionality available in JGroups.

Essentially what RELAY2 provides is a “bridge” between N different JGroups clusters that are physically separated by a WAN or other network arrangement. By adding RELAY2 into your JGroups stack you can fairly easily ensure that ALL messages passed through a JGroups channel will be distributed over the bridge to all other JGroups clusters in other DCs. Think of RELAY2 as a secondary “cluster” in addition to your local cluster; however only ONE node in your local JGroups cluster (the coordinator) is responsible for “relaying” all messages over the bridge to the other coordinators in the relay cluster, for distribution to each coordinator’s local JGroups cluster. So “out of the box” this lets you ensure that all Liferay cluster operations that occur in DC1 get transmitted to DC2 transparently. With this enabled, when we add that user to our local Lucene index across all local nodes via UDP in the local cluster, the coordinator node in DC2 will also receive that event and transmit it locally to all nodes in the DC2 cluster. Now when you view the “users” on node2 in DC2 you will see the data that was added by node1 in DC1.

One IMPORTANT caveat is that using the RPC building blocks in JGroups/RELAY2 does not effectively cascade RPC calls over the bridge. See here and this thread. To Liferay’s credit they did not implement the “RPC”-like method invocations across Liferay clusters using RPC in JGroups (I’m not sure why), but rather they serialize ClusterRequests, which encapsulate what is to be invoked on the other side via method reflection, and just send these messages as serialized objects over the wire. Had they used RPC, one would have to modify Liferay’s code to get these RPC invocations across the RELAY2 bridge.


How to configure in Liferay:

What I am describing below assumes you are just using the Liferay cluster defaults of a local UDP multicast cluster. Again, if you are in AWS you will just need to adjust your unicast TCP JGroups stack accordingly; the configuration is pretty much the same w/ regards to where RELAY2 is configured in the stack.

IMPORTANT: this only covers the transport and control channels. If you want to enable this kind of relay bridge for the separate Ehcache channels that Liferay uses, you will repeat the process (described in the steps below) for the Ehcache JGroups channel definitions in Liferay as well. Summary high-level steps are below. (Note: alternatively you could leave the Ehcache JGroups configuration alone in Liferay and just leverage the ehcache jms wan replicator.)

  • UNCOMMENT: ehcache.cache.manager.peer.provider.factory = net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory
  • MODIFY: “ehcache.multi.vm.config.location.peerProviderProperties” and add a “connect” property to manually define a JGroups stack that incorporates RELAY2 similar in the fashion to how we do it below for the “control” and “transport” channels. @see the ehcache documentation here.
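For illustration only, here is a sketch of what those two entries might end up looking like in portal-ext.properties. The JGroups stack string is an assumption: it must mirror whatever stack your Ehcache channel already uses, with FORWARD_TO_COORD:relay.RELAY2 appended just as is done for the control/transport channels later in this article; “relay2-ehcache.xml” is a hypothetical relay configuration file you would create alongside the transport/control ones:

```properties
# Illustrative sketch only -- verify protocol names/parameters against your
# existing Ehcache JGroups stack before using.
ehcache.cache.manager.peer.provider.factory=net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory

ehcache.multi.vm.config.location.peerProviderProperties=connect=UDP(mcast_addr=239.255.0.5;mcast_port=23305):PING:MERGE3:FD_SOCK:FD_ALL:VERIFY_SUSPECT:pbcast.NAKACK2:UNICAST2:pbcast.STABLE:pbcast.GMS:UFC:MFC:FRAG2:FORWARD_TO_COORD:relay.RELAY2(site=dc1;config=${configRootDir}/relay2-ehcache.xml;relay_multicasts=true)
```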

 

Ok, first let’s define our “relay clusters” for both the control and transport channels in Liferay. Note: all the steps below need to be done for all nodes across all DC’s, and you need to adjust certain things relative to the DC you are running in, particularly the “site” names in the RELAY2 configurations.

1. Create a file in your WEB-INF/classes dir called “relay2-transport.xml”.

<RelayConfiguration xmlns="urn:jgroups:relay:1.0">
    <sites>
        <site name="dc1" id="0">
            <bridges>
                <bridge config="relay2_global_transport_tcp.xml" name="global_transport" />
            </bridges>
        </site>

        <site name="dc2" id="1">
            <bridges>
                <bridge config="relay2_global_transport_tcp.xml" name="global_transport" />
            </bridges>
        </site>
    </sites>
</RelayConfiguration>

2. Create a file in your WEB-INF/classes dir called “relay2-control.xml”

<RelayConfiguration xmlns="urn:jgroups:relay:1.0">
    <sites>
        <site name="dc1" id="0">
            <bridges>
                <bridge config="relay2_global_control_tcp.xml" name="global_control" />
            </bridges>
        </site>

        <site name="dc2" id="1">
            <bridges>
                <bridge config="relay2_global_control_tcp.xml" name="global_control" />
            </bridges>
        </site>
    </sites>
</RelayConfiguration>

3. Next we need to configure the actual TCP relay clusters for both the control/transport relay configurations. The sample below is a TEMPLATE you can copy twice, once for “relay2_global_control_tcp.xml” and again for “relay2_global_transport_tcp.xml”. Change the TCP.bind_port and TCPPING.initial_hosts appropriately for each file. For TCPPING.initial_hosts, you will want to list ONE known node that lives in each DC. These will be the initial relay coordinator nodes.

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns="urn:org:jgroups"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups.xsd">

    <TCP
         bind_port="7800"
         recv_buf_size="${tcp.recv_buf_size:5M}"
         send_buf_size="${tcp.send_buf_size:5M}"
         max_bundle_size="64K"
         max_bundle_timeout="30"
         use_send_queues="true"
         sock_conn_timeout="300"

         timer_type="new"
         timer.min_threads="4"
         timer.max_threads="10"
         timer.keep_alive_time="3000"
         timer.queue_max_size="500"

         thread_pool.enabled="true"
         thread_pool.min_threads="2"
         thread_pool.max_threads="8"
         thread_pool.keep_alive_time="5000"
         thread_pool.queue_enabled="true"
         thread_pool.queue_max_size="10000"
         thread_pool.rejection_policy="discard"

         oob_thread_pool.enabled="true"
         oob_thread_pool.min_threads="1"
         oob_thread_pool.max_threads="8"
         oob_thread_pool.keep_alive_time="5000"
         oob_thread_pool.queue_enabled="false"
         oob_thread_pool.queue_max_size="100"
         oob_thread_pool.rejection_policy="discard"/>

    <TCPPING timeout="3000"
             initial_hosts="${jgroups.tcpping.initial_hosts:IP_OF_DC1_NODE[7800],IP_OF_DC2_NODE[7800]}"
             port_range="0"
             num_initial_members="10"/>
    <MERGE2  min_interval="10000"
             max_interval="30000"/>
    <FD_SOCK/>
    <FD timeout="3000" max_tries="3" />
    <VERIFY_SUSPECT timeout="1500"  />
    <BARRIER />
    <pbcast.NAKACK2 use_mcast_xmit="false"
                   discard_delivered_msgs="true"/>
    <UNICAST2 />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
                   max_bytes="4M"/>
    <pbcast.GMS print_local_addr="true" join_timeout="3000"
                view_bundling="true"/>
    <MFC max_credits="2M"
         min_threshold="0.4"/>
    <FRAG2 frag_size="60K"  />
    <!--RSVP resend_interval="2000" timeout="10000"/-->
    <pbcast.STATE_TRANSFER/>
</config>

 

4. At this point we have the relay cluster configuration defined. Now we need to adjust Liferay’s “control” and “transport” channel JGroups stacks to utilize them. Open up portal-ext.properties and add the following to the control and transport channels. Note these are long lines, copied from the portal.properties defaults into portal-ext.properties, with the following appended to the stack:

:FORWARD_TO_COORD:relay.RELAY2(site=[DC1 | DC2];config=${configRootDir}/relay2-[transport|control].xml;relay_multicasts=true)

IMPORTANT: be sure to set the “site=” property of RELAY2 to match the DC you are configuring this for; it must also match the “site” name in the relay2-transport.xml and relay2-control.xml files accordingly.


cluster.link.channel.properties.control=UDP(bind_addr=${cluster.link.bind.addr["cluster-link-control"]};mcast_group_addr=${multicast.group.address["cluster-link-control"]};mcast_port=${multicast.group.port["cluster-link-control"]}):PING(timeout=2000;num_initial_members=20;break_on_coord_rsp=true):MERGE3(min_interval=10000;max_interval=30000):FD_SOCK:FD_ALL:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK2(xmit_interval=1000;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=30000;max_msg_batch_size=500;use_mcast_xmit=false;discard_delivered_msgs=true):UNICAST2(max_bytes=10M;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;max_msg_batch_size=500):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M):pbcast.GMS(join_timeout=3000;print_local_addr=true;view_bundling=true):UFC(max_credits=2M;min_threshold=0.4):MFC(max_credits=2M;min_threshold=0.4):FRAG2(frag_size=61440):RSVP(resend_interval=2000;timeout=10000):FORWARD_TO_COORD:relay.RELAY2(site=[DC1 | DC2];config=${configRootDir}/relay2-control.xml;relay_multicasts=true)

cluster.link.channel.properties.transport.0=UDP(bind_addr=${cluster.link.bind.addr["cluster-link-udp"]};mcast_group_addr=${multicast.group.address["cluster-link-udp"]};mcast_port=${multicast.group.port["cluster-link-udp"]}):PING(timeout=2000;num_initial_members=20;break_on_coord_rsp=true):MERGE3(min_interval=10000;max_interval=30000):FD_SOCK:FD_ALL:VERIFY_SUSPECT(timeout=1500):pbcast.NAKACK2(xmit_interval=1000;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=30000;max_msg_batch_size=500;use_mcast_xmit=false;discard_delivered_msgs=true):UNICAST2(max_bytes=10M;xmit_table_num_rows=100;xmit_table_msgs_per_row=2000;xmit_table_max_compaction_time=60000;max_msg_batch_size=500):pbcast.STABLE(stability_delay=1000;desired_avg_gossip=50000;max_bytes=4M):pbcast.GMS(join_timeout=3000;print_local_addr=true;view_bundling=true):UFC(max_credits=2M;min_threshold=0.4):MFC(max_credits=2M;min_threshold=0.4):FRAG2(frag_size=61440):RSVP(resend_interval=2000;timeout=10000):FORWARD_TO_COORD:relay.RELAY2(site=[DC1 | DC2];config=${configRootDir}/relay2-transport.xml;relay_multicasts=true)

 

5. Ok, now adjust Liferay’s startup Java options to add the following, then start Liferay. This is required so that the JGroups stacks can find the relay2-transport.xml and relay2-control.xml relay configuration files.

“-DconfigRootDir=/path/to/dir/with/relay2/config/files”
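For example, on Tomcat this would typically go in bin/setenv.sh. A hedged sketch (the path and the Tomcat assumption are placeholders; adjust for however your container launches Liferay):

```shell
# Example for Tomcat's setenv.sh -- the directory path is a placeholder; it must
# contain the relay2-transport.xml and relay2-control.xml files created above.
CATALINA_OPTS="$CATALINA_OPTS -DconfigRootDir=/opt/liferay/relay2-conf"
export CATALINA_OPTS
```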

6. In Liferay’s logs you should see some additional JGroups control/transport channels showing up that represent the bridge between the remote sites.

 

At this point you should now have two separate Liferay clusters in two separate data-centers “bridged” using JGroups RELAY2, so that many of the issues described in this article are resolved and Liferay cluster events/messages are received across both data-centers. Note that unless you also tweaked the Ehcache JGroups configuration (as noted earlier) or are using the ehcache jms wan replicator, the Ehcache clustered cache replication events will not be sent to the other DC’s.

That said, again, this is not necessarily the best way to do this. The best way is yet to be determined, as this is an experiment in progress. There are many things that may or may not really warrant utilizing RELAY2 as the means to bridge multiple separated Liferay clusters. Liferay can generate a lot of cluster traffic, and if you are bridging over a WAN this may not be efficient, or could potentially block operations in DC1 while waiting on a coordinator or timeout in DC2, resulting in perceived slowness on the sender side depending on whether Liferay is invoking things remotely via ClusterRequests synchronously or asynchronously.

The latter could potentially be alleviated by tweaking your TCP bridge configuration to optimize the TCP “max_bundle_size” and “max_bundle_timeout” parameters. Doing so you could reduce the sheer number of messages sent back and forth over the bridge, i.e. let the bridge queue up N messages, or wait until the total amount of data to be sent is >= N(size), effectively “batching” the data to be sent around the WAN. Note when tweaking these configuration settings you may encounter the following kind of warning, indicating that you need to adjust your OS settings accordingly:

“WARN  [localhost-startStop-1][UDP:547] [JGRP00014] the receive buffer of socket MulticastSocket was set to 500KB, but the OS only allocated 124.93KB. This might lead to performance problems. Please set your max receive buffer in the OS correctly (e.g. net.core.rmem_max on Linux)”
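The batching idea behind max_bundle_size/max_bundle_timeout can be sketched in a few lines. This is an illustrative toy (not JGroups code, and it omits the timeout-based flush a real bundler also has): buffer messages and flush only when a count or byte threshold is reached, trading per-message latency for far fewer round trips over the WAN:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of size-based message bundling, similar in spirit to
// JGroups' max_bundle_size. A real bundler also flushes on a timeout.
public class WanBatcher {
    private final int maxMessages;
    private final int maxBytes;
    private final List<byte[]> buffer = new ArrayList<>();
    private int bufferedBytes = 0;
    int flushCount = 0; // how many batches have been "sent" over the bridge

    WanBatcher(int maxMessages, int maxBytes) {
        this.maxMessages = maxMessages;
        this.maxBytes = maxBytes;
    }

    void send(byte[] msg) {
        buffer.add(msg);
        bufferedBytes += msg.length;
        // flush when either threshold is hit
        if (buffer.size() >= maxMessages || bufferedBytes >= maxBytes) {
            flush();
        }
    }

    void flush() {
        if (buffer.isEmpty()) return;
        // a real bridge would serialize the whole batch and send it as ONE message
        flushCount++;
        buffer.clear();
        bufferedBytes = 0;
    }

    public static void main(String[] args) {
        WanBatcher b = new WanBatcher(3, 1024);
        b.send(new byte[100]);
        b.send(new byte[100]);
        b.send(new byte[100]); // third message hits the count threshold
        System.out.println(b.flushCount); // 1
    }
}
```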

Potential alternatives to RELAY2 are implementing point-specific solutions for transmitting the most visible/critical information to other DC’s, such as cache events, in a batched, optimized fashion by using something like the Ehcache JMS WAN replicator. Another example is writing an extension for Liferay which would do something similar for batching “reindex()” requests across DC’s. Rather than relying on RELAY2, which would transmit full Lucene documents over the wire… one by one… an extension could be developed to batch these in an optimized fashion where only model ids, type and operation request are conveyed rather than the entire Lucene document.

Also note that because Liferay streams the entire contents of its Lucene indexes to peer nodes… it is important to understand “who” Liferay considers a peer node. This is determined by calling getView() on its “control” JChannel’s receiver (Liferay’s BaseReceiver), which should only return local DC peers, not those across a RELAY2 bridge; this should prevent a node from connecting to a node in a WAN-separated DC to stream the index (note: according to these docs, a JGroups channel stack with RELAY2 enabled does NOT return views that span bridges). If you didn’t use RELAY2 but instead manually configured a giant UNICAST cluster across WANs, you would have to consider how clusters boot up: if you start one node in DC1, then node2 in DC1 would get its index update from node1 (DC1), which is fine because they are on the same local network. However when node3 comes up in DC2, because it recognizes the “master” node (via the shared database Lock_ table entry) as living in DC1, it would have to stream its index over the WAN. I’d also like to note that Liferay WILL attempt to stream an index from a node in another DC if you do a manual “reindex” that is initiated by a remote DC; however this will fail due to the way the ClusterRequest to reindex() in other DC’s sends along the JGroups “address” (see the footnotes article section on “ClusterLoadingSyncJob” for more details).

There is also one other oddity I noted in “who” Liferay considers are “peers” when RELAY2 is used. See the section on ClusterMasterExecutorImpl in the footnotes for more information.

Regardless, doing any of the latter customizations requires analysis of Liferays behavior/code to determine the impact on what “events” in Liferay could be missed by not using RELAY2 (which will catch everything). You also would then be responsible for keeping tabs on what changes as Liferay does new releases, modifies the way one particular “clusterable” action behaves or adds totally new features that send messages over the cluster.

Hopefully this document will help others out there trying to solve this kind of problem and save others some valuable time!


Testing yas3fs: a distributed S3 FUSE filesystem

I’ve recently been doing quite a bit of evaluation of a few S3 filesystems, one in particular being yas3fs, which so far is quite impressive. I plan on doing a more detailed post about it later; however for now I’d like to share a little tool I wrote to help me in my evaluation of it. You can check it out at https://github.com/bitsofinfo/yas3fs-cluster-tester

Part 2: Nevado JMS, Ehcache JMS WAN replication and AWS

This post is a followup to what is now part one of my research into using Ehcache, JMS, Nevado, AWS, SNS and SQS to permit cache events to be replicated across different data-centers.

@see https://github.com/bitsofinfo/ehcache-jms-wan-replicator

In my first post I was able to successfully get Ehcache’s JMS replication to function using Nevado JMS as the provider after patching the serialization issues. However, as noted in that article, the idea of sending the actual cached data whenever a put/update occurred (if the cache replicator is configured that way) on any given node sounded like it might get out of hand. Secondly, polling from SNS is slow: apps can generate thousands of remove events that need to be distributed globally, and consumers in other DCs will fall way behind; batching optimized for SNS is needed. Given that, I started looking at writing my own prototype of something lighter weight, but eventually came to the realization that the existing Ehcache JMS replication framework could be customized/extended to permit the modifications that were needed.
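To sketch the batching idea mentioned above, here is a hypothetical, minimal accumulator (illustrative only, not code from the project; the class and method names are my own): instead of one SNS publish per removed key, collect keys and flush them as a single message once a threshold is hit.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of batching cache-remove events before publishing:
// rather than one message per removed key, accumulate keys and publish
// them as one batched message, drastically reducing SNS publish calls.
public class RemoveEventBatcher {

    private final int maxBatchSize;
    private final Consumer<List<String>> publisher; // e.g. would wrap an SNS publish
    private final List<String> pending = new ArrayList<>();

    public RemoveEventBatcher(int maxBatchSize, Consumer<List<String>> publisher) {
        this.maxBatchSize = maxBatchSize;
        this.publisher = publisher;
    }

    // Called by a cache listener whenever a key is removed
    public synchronized void onRemove(String cacheKey) {
        pending.add(cacheKey);
        if (pending.size() >= maxBatchSize) {
            flush();
        }
    }

    // A real implementation would also flush on a timer so that small
    // batches don't sit around indefinitely during low traffic.
    public synchronized void flush() {
        if (!pending.isEmpty()) {
            publisher.accept(new ArrayList<>(pending));
            pending.clear();
        }
    }
}
```

For example, with a batch size of 500, a burst of 5,000 remove events becomes 10 publishes instead of 5,000, which also keeps consumers in remote DCs from falling behind on per-message polling overhead.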

Out of this came the ehcache-jms-wan-replicator project on GitHub. There is a diagram below showing the concept in general, however I suggest you read the README.md on GitHub instead, because I’ll be updating that project as this research evolves. So far it seems to work pretty well and plays fine running side-by-side with any existing Ehcache (RMI/JGroups) replication you already have configured. It has been integrated in a Liferay 6.2 Portal cluster across two data-centers in a test setup and so far is working as expected.

Hopefully this will give others some ideas or be useful to someone else as well.

Ehcache replicated caching with JMS, AWS, SQS, SNS & Nevado

Read part 2 of this research here

Recently I’ve been researching ways to GSLB a very large app that relies on Ehcache for numerous things internally, such as cached page output, cached lists, etc. Anyone who has experience getting large-scale applications to function active-N in a GSLB’d setup knows the challenges such topologies present. The typical challenge you will face is how to bridge events that occur in a locally clustered application in one data center (DC-A) with another logical instance footprint of the same application living in DC-B. This extends from the data-source all the way up the stack.

So for example, let’s say user A is accessing the application and hitting instances of it residing in DC-A. This user updates some inventory data that is cached locally in the cluster in DC-A; this same cached inventory also resides in the cluster running in DC-B (being accessed by different users in some other region). When user A updates this inventory data, the local application instance writes it to the data-source and then does some sort of cache operation, such as a cache key remove or put (replace). Setting aside the entirely separate issue of how the data-source write itself is visible across both DCs, the point is that the cache update in DC-A is visible only to participating instances in DC-A. DC-B’s cache knows nothing of this; only its data-source is aware of the new information, so we need a way to make DC-B aware of this cache event. There are a few ways this can happen; for example, we could just configure the caches to rely solely on LRU/TTL-driven expiry, or actually respond to events in a near-real-time fashion.

Now before we go on I’ll state up-front that although what I am about to describe would work (to an extent), I ultimately will likely NOT go with this setup due to the inefficiencies involved, particularly the amount of data being moved across WANs if you use the Ehcache JMS replicated caching feature alone (i.e. the cached data itself is actually moved around, rather than just operation events).

Continuing with that train of thought after the latter caveat: one thing I started looking at was the Ehcache JMS Replicated Caching feature. This feature boils down to permitting you to configure any cache to utilize JMS (Java Message Service) for publishing cache events. When a PUT/REMOVE happens, Ehcache wires up a cache listener that responds and relays these events (including the cached data on puts) to a JMS topic. Any other Ehcache node configured with this same setup can subscribe to those topics and receive those events. Couple this with a globally accessible messaging system and you have a backbone for distributing these events across multiple disparate data-centers… but who in the hell wants to set up their own globally accessible, fault-tolerant messaging system implementation? Not me.

Enter AWS’s SNS (Simple Notification Service, topics) and SQS (Simple Queue Service). I decided I’d try to get Ehcache’s JMS Replicated Caching feature to utilize AWS as the JMS provider… enter Nevado JMS from the Skyscreamer team (github link). Nevado is a cool little JMS implementation that fronts SNS/SQS, and it works pretty well!

Note: the code is at the end of this post. Yes, it is very basic and NOT production ready/tested; it was just for prototyping/research and is a bit hacked together. Also note this code is reliant upon this PATCH to Nevado, which is pending discussion/approval.

  1. The first step was creating an Ehcache CacheManagerPeerProviderFactory (NevadoJMSCacheManagerPeerProviderFactory), which returns a JMSCacheManagerPeerProvider to Ehcache that is configured to use Nevado on the backend
  2. The NevadoJMSCacheManagerPeerProviderFactory boots a little spring context that sets up the NevadoTopic etc
  3. Created a little test program (below) EhcacheNevadoJMSTest. I just ran several instances of this concurrently w/ breakpoints to validate that events in one JVM/ehcache instance were indeed being broadcast over JMS -> AWS -> back to other Ehcache instances on other JVM instances.
  4. The first thing I noticed was that while the events were indeed being sent over JMS to AWS and received by other Ehcache peers, the actual cached data (Element) embedded within the JMSEventMessage was NOT being included, resulting in NullPointerExceptions in the Ehcache peers who received the event.
  5. The latter was due to an Object serialization issue, and transient soft references as described in this Nevado Github issue #81
  6. Once I created a patch for Nevado to use the ObjectOutputStream things worked perfectly.

CAVEATS

  • Again this code was for research/prototyping
  • The viability of moving the actual cached element to AWS, across WANs and back to other data-centers is questionable. It would work, but under high volume you could spend a lot of $$ and bandwidth.
  • SQS/SNS has message size limitations; cached data beyond that limit would be truncated and lost, effectively making the solution useless.
  • Ideally, all one really cares about is “what happened”, meaning Ehcache KEY-A was PUT or REMOVED etc. Then let the receiving DC decide what to do (i.e. remove the cached KEY locally and let the next user-driven request re-populate it from the primary source, the real data-source). This results in much smaller message sizes. The latter is what I’m now looking at; using the Ehcache listener framework with some custom calls to SNS/SQS would suffice for this kind of implementation.
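To make the size argument concrete, here is a hypothetical sketch (plain Java serialization; the `KeyOnlyEvent` class and its fields are illustrative names of my own, not part of Ehcache or Nevado) comparing a key-only event against shipping the cached value itself:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical illustration: a key-only cache event carries just enough
// information for a remote DC to invalidate locally, versus shipping the
// entire cached value over the WAN.
public class KeyOnlyEventDemo {

    // What happened, to which key, in which cache; no payload at all
    public static class KeyOnlyEvent implements Serializable {
        final String cacheName;
        final String key;
        final String operation; // e.g. "PUT" or "REMOVE"
        public KeyOnlyEvent(String cacheName, String key, String operation) {
            this.cacheName = cacheName;
            this.key = key;
            this.operation = operation;
        }
    }

    public static byte[] serialize(Serializable s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(s);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Simulate a sizable cached value, e.g. rendered page output
        String cachedValue = "x".repeat(100_000);
        byte[] full = serialize(cachedValue);
        byte[] keyOnly = serialize(new KeyOnlyEvent("testCache", "key1", "REMOVE"));
        System.out.println("full element: " + full.length
                + " bytes, key-only event: " + keyOnly.length + " bytes");
    }
}
```

A key-only event stays a few hundred bytes regardless of how large the cached value is, which also sidesteps the SQS/SNS message size limit noted above.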

CODE SNIPPETS

Github patch for Nevado @ https://github.com/skyscreamer/nevado/issues/81

NevadoJMSCacheManagerPeerProviderFactory, Ehcache uses this as its cacheManagerPeerProviderFactory


package com.bitsofinfo.ehcache.jms;

import java.util.Properties;

import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.Topic;
import javax.jms.TopicConnection;

import org.skyscreamer.nevado.jms.NevadoConnectionFactory;
import org.skyscreamer.nevado.jms.destination.NevadoQueue;
import org.skyscreamer.nevado.jms.destination.NevadoTopic;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import net.sf.ehcache.CacheManager;
import net.sf.ehcache.distribution.CacheManagerPeerProvider;
import net.sf.ehcache.distribution.CacheManagerPeerProviderFactory;
import net.sf.ehcache.distribution.jms.AcknowledgementMode;
import net.sf.ehcache.distribution.jms.JMSCacheManagerPeerProvider;

public class NevadoJMSCacheManagerPeerProviderFactory extends CacheManagerPeerProviderFactory {

    @Override
    public CacheManagerPeerProvider createCachePeerProvider(CacheManager cacheManager, Properties props) {
        
        try {
        
            ApplicationContext context = new ClassPathXmlApplicationContext("/com/bitsofinfo/ehcache/jms/nevado.xml");
            
            NevadoConnectionFactory nevadoConnectionFactory = (NevadoConnectionFactory)context.getBean("connectionFactory");
            TopicConnection topicConnection = nevadoConnectionFactory.createTopicConnection();
            QueueConnection queueConnection = nevadoConnectionFactory.createQueueConnection();
            
            Topic nevadoTopic = (NevadoTopic)context.getBean("ehcacheJMSTopic");
            Queue nevadoQueue = (NevadoQueue)context.getBean("ehcacheJMSQueue");
            
            return new JMSCacheManagerPeerProvider(cacheManager,
                                                   topicConnection,
                                                   nevadoTopic,
                                                   queueConnection,
                                                   nevadoQueue,
                                                   AcknowledgementMode.AUTO_ACKNOWLEDGE,
                                                   true);
        } catch(Exception e) {
            e.printStackTrace();
            return null;
        }
    }

}

nevado.xml (NevadoJMSCacheManagerPeerProviderFactory boots this to init Nevado topic/queue @ AWS)


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">
           
      
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
      <property name="locations">
        <list>
          <value>classpath:/com/bitsofinfo/ehcache/jms/aws.properties</value>
        </list>
      </property>
    </bean>
           
    <bean id="sqsConnectorFactory" class="org.skyscreamer.nevado.jms.connector.amazonaws.AmazonAwsSQSConnectorFactory" />
    
    <bean id="connectionFactory" class="org.skyscreamer.nevado.jms.NevadoConnectionFactory">
      <property name="sqsConnectorFactory" ref="sqsConnectorFactory" />
      <property name="awsAccessKey" value="${aws.accessKey}" />
      <property name="awsSecretKey" value="${aws.secretKey}" />
    </bean>
    
    <bean id="ehcacheJMSTopic" class="org.skyscreamer.nevado.jms.destination.NevadoTopic">
          <constructor-arg value="ehcacheJMSTopic" />
    </bean>
    
    <bean id="ehcacheJMSQueue" class="org.skyscreamer.nevado.jms.destination.NevadoQueue">
          <constructor-arg value="ehcacheJMSQueue" />
    </bean>

    
           
 </beans>

ehcache.xml

<ehcache>

   <diskStore path="user.home/ehcacheJMS"/>

    
    <cacheManagerPeerProviderFactory
       class="com.bitsofinfo.ehcache.jms.NevadoJMSCacheManagerPeerProviderFactory"
       properties=""
       propertySeparator="," />
       
       
     <cache name="testCache"
           maxElementsInMemory="1000"
           maxElementsOnDisk="2000"
           eternal="false"
           overflowToDisk="false"
           memoryStoreEvictionPolicy="LRU"
           transactionalMode="off">
           
           <cacheEventListenerFactory
                 class="net.sf.ehcache.distribution.jms.JMSCacheReplicatorFactory"
                 properties="replicateAsynchronously=true,
                              replicatePuts=true,
                              replicateUpdates=true,
                              replicateUpdatesViaCopy=true,
                              replicateRemovals=true,
                              asynchronousReplicationIntervalMillis=1000"
                  propertySeparator=","/>

      </cache>
    
       
</ehcache>

EhcacheNevadoJMSTest – little test harness program; run multiple instances of this with breakpoints to see Ehcache utilize JMS (Nevado/SNS) to broadcast cache events


package com.bitsofinfo.ehcache.jms;

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class EhcacheNevadoJMSTest {

    public static void main(String[] args) throws Exception {
        
        ApplicationContext context = new ClassPathXmlApplicationContext("/com/bitsofinfo/ehcache/jms/bootstrap.xml");
        
        CacheManager cacheManager = (CacheManager)context.getBean("cacheManager");
        
        Cache testCache = cacheManager.getCache("testCache");

        Element key1 = testCache.get("key1");
        Element key2 = testCache.get("key2");
        key1 = testCache.get("key1");
        
        testCache.put(new Element("key1", "value1"));
        testCache.put(new Element("key2", "value2"));
        testCache.remove("key1");
    }

}

bootstrap.xml – used by the test harness


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
        <property name="configLocation" value="classpath:/com/bitsofinfo/ehcache/jms/ehcache.xml"/>
    </bean>

</beans>

Ideas worth spreading

If you haven’t been exposed to Nathan Marz’s ideas on Big Data, the following links are definitely worth your time:

http://manning.com/marz/

http://www.infoq.com/presentations/Complexity-Big-Data

http://nathanmarz.com/speaking/
