
Microservices with Spring Cloud & Docker

Recently, a team I was working with faced an architectural decision about which technology and deployment footprint to go with for a greenfield project.

It's been about five months now since this application went into production.

Use case:

The use case in question was to present a suite of REST services fronting a large set of “master data” dimensions for a data warehouse, as well as to secure that data (record-level ACLs). In addition, the security ACLs it would manage needed to be transformed and pushed downstream to various legacy systems.

With that general use case in mind, some other items of note were considered:

  • The specific use case of a “REST services facade for master data” was generic in nature, so the security model was to be agnostic of the specific data set being secured and capable of being applied across different data sets for different clients.
  • Each service should be easy to fix and deploy independently of the others, with minimal interruption.
  • The services need to scale easily and be deployed across several different data centers, which are a mix of traditional bare-metal/ESX VMs as well as cloud (Azure/AWS). Tight coupling to each DC should be minimized where possible.
  • The services stack would potentially serve as the hub for orchestrating various other ETL-related processes for the warehouse, so new “services” should be easy to integrate into the larger application.
  • Given the sensitivity of the data, all traffic should be secured with TLS and the REST APIs locked down with OAuth2 client-credentials-based access.

Given the above requirements, and after much discussion, we decided to go with a container-based microservices architecture.

Why?

First off, this team already had significant experience with the traditional monolithic approach to applications and had run into its many long-term shortcomings. As new features needed to be deployed, it was becoming more of a pain to add new “services” to the monolith, since doing so required redeploying the entire stack, which is disruptive. Given that this new application would have a similar lifecycle (new services needing to be added over time), we wanted to try a different approach… and who was the new kid on the block? “Microservices”; it was time to get one's feet wet.

This shop was primarily focused on NodeJS, LAMP and Java stacks, so after doing some research the decision was made to go with Spring Cloud as the base framework to build this new suite of services. If you do any reading on the topic of microservices, you will quickly see that such architectures involve many moving parts: service discovery, configuration, call tracing (think Google Dapper), load balancing and so on.

Do you want to write all of these pattern implementations yourself? Probably not; I sure didn't. So after evaluating the space at the time, Spring Cloud was the most robust solution, and one of its biggest selling points is that it builds on many of the great frameworks that have come out of Netflix's OSS project (Eureka, Hystrix and more).
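To give a flavor of what the framework takes off your plate: in a Brixton-era Spring Cloud app, getting a service to register with Eureka and bootstrap its configuration from a config server is mostly annotations and a small bootstrap file. A minimal sketch follows (service names and URLs are hypothetical, not from the project described here):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

    @SpringBootApplication
    @EnableDiscoveryClient  // register with Eureka on startup and enable lookups of peers
    public class MyServiceApplication {
        public static void main(String[] args) {
            SpringApplication.run(MyServiceApplication.class, args);
        }
    }

with a bootstrap.yml along these lines:

    # bootstrap.yml (values are illustrative)
    spring:
      application:
        name: my-service           # the logical name peers use to find you
      cloud:
        config:
          discovery:
            enabled: true          # locate the config server via Eureka
            serviceId: config-service
    eureka:
      client:
        serviceUrl:
          defaultZone: http://discovery-host:8761/eureka/

Everything else (heartbeats, registry refresh, client-side load balancing) is handled by the libraries.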

Lastly, the decision to go with Docker was really a no-brainer. The services would potentially need to be deployed and moved across various data centers. By using Docker, DevOps would have a footprint and deployment process that was consistent regardless of which data center we were pushing to. The only data-center-specific particulars our DevOps guys had to care about were setting up the Docker infrastructure (think Docker hosts on VMs via Ansible, coupled to DC-specific host provisioning APIs) and the DC-specific load balancers, whose coupling to the application was just a few IPs and ports (i.e. the IPs of the swarm nodes with the exposed ports of our Zuul containers). Everything downstream from that was handled by Docker Swarm and the microservices framework itself (discovery, routing etc.).

CELL

The acronym for this services backend ended up being CELL, which stands for… well, whatever you want it to stand for… I guess think of it (the app) as an organism made up of various cells (services). CELL's services are consumed by various applications that present nice user interfaces to end users.

[Diagram: high-level breakdown of CELL's deployment footprint]

The above diagram gives a high-level breakdown of its footprint. It's broken up into several services:

Core services that all other app services utilize:

  • cell-discovery: Netflix Eureka: participating services register here on startup and use it both to discover the cell-config service (to bootstrap themselves) and to discover any peer-level services they need to talk to.
  • cell-config: spring-cloud-config: Git-sourced application configuration (with encryption support). Each application connects to this on startup to configure itself.
  • oauth2-provider: all services are configured with an OAuth2 client-credentials-compliant token generation endpoint to authenticate and get tokens, which all peer services validate (acting as resource servers).
  • tracing-service: zipkin: all services are instrumented with hooks that decorate outbound HTTP requests (and interpret them upon reception) with zipkin-compliant tracing headers to collect call-tracing metrics. Background threads send this data periodically to the tracing service.
  • cell-event-bus: kafka and spring-cloud-stream: certain services publish events that other services subscribe to in order to maintain local caches or react to logical events. This provides somewhat looser coupling than direct service-to-service communication; leveraging Kafka also gives us concepts such as consumer groups for different processing requirements (i.e. all-or-one).
  • cell-router: Netflix Zuul: router instances provide a single point of access to all application services under an https://router/service-name/ facade (discovered via the discovery service). Upstream, data-center-specific FQDN-bound load balancers only need to know about the published ports for the Zuul routers on the swarm cluster to be able to access any application service available in CELL.
  • cell-service-1-N: these represent the domain-specific application services that contain the actual business logic invoked by external callers. Over time more of these will be added to CELL, and this is where the real modularity comes into play. We try to stick to the principle of one specific service per specific business-logic use case. (A sketch of how one service calls another through this machinery follows this list.)
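To make the “abstract service name” style of inter-service access concrete, the sketch below shows roughly how one CELL-style service could call a peer through Eureka using a Feign client, as Spring Cloud (Brixton era) wires it up. The service and endpoint names are hypothetical; this is not CELL's actual code:

    import org.springframework.cloud.netflix.feign.FeignClient;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    // "cell-service-1" is the peer's Eureka-registered logical name; no host
    // names or ports appear anywhere in the caller. Ribbon load-balances
    // across whatever instances are currently registered.
    @FeignClient("cell-service-1")
    public interface CellService1Client {

        @RequestMapping(method = RequestMethod.GET, value = "/items/{id}")
        String getItem(@PathVariable("id") String id);
    }

Enable it with @EnableFeignClients on the application class and inject CellService1Client wherever it's needed; discovery, load balancing and (with the tracing hooks above) zipkin header propagation all ride along for free.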

CELL Security

As noted above, one of the requirements for CELL was that participating services could have the data they manage gated by a generic security ACL system. To fulfill this requirement, one of those domain-specific apps is the cell-security service.


The cell-security service is built around a common library that both cell-security servers and clients use to fulfill their ends of the contract. The contract is defined via some general modeling (below, with a simplified code sketch after the list) and standard server/client REST contracts that can easily be exposed in any new “service” by including the library and adding some Spring @[secConfig] annotations to an app's configuration classes.

  • Securable: something that can have access to it gated by a SecurityService. Securables can be part of a chain to implement inheritance or any strategy one needs.
  • Accessor: something that can potentially access a Securable.
  • ACL: binds an Accessor to a Securable with a set of Permissions for a given context, plus an optional expression to evaluate against the Securable.
  • SecurableLocator: given a Securable's GUID, can retrieve a Securable or a chain of Securables.
  • AccessorLocator: given an Accessor's GUID, can retrieve the Accessor.
  • AccessorLocatorRegistry: manages information about available AccessorLocators.
  • SecurableLocatorRegistry: manages information about available SecurableLocators.
  • ACLService: provides access to manage ACLs.
  • PrincipalService: provides access to manage Principals.
  • LocatorMetadataService: provides access to manage metadata about Securable/Accessor Locators.
  • ACLExpressionEvaluator: evaluates ACL expressions against a Securable.
  • SecurityService: checks access to a Securable for a requesting Accessor.
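In code, the heart of that model reduces to a handful of small interfaces. The sketch below is a simplified approximation to make the relationships concrete; the method signatures are illustrative, not CELL's actual contract:

    // Simplified approximation of the core cell-security contracts.
    public interface Securable {
        String getGuid();
        Securable getParent();  // optional chain, enabling inherited ACLs
    }

    public interface Accessor {
        String getGuid();
    }

    public interface SecurableLocator {
        Securable locate(String securableGuid);
    }

    public interface AccessorLocator {
        Accessor locate(String accessorGuid);
    }

    public interface SecurityService {
        // The single question every participating service ultimately asks:
        boolean hasAccess(Accessor accessor, Securable securable, String permission);
    }

A service that wants its data gated supplies its own SecurableLocator (mapping its domain GUIDs to Securables) and leans on the default implementations for the rest.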

The model above is expressed via standard REST contracts and interfaces in code that are fulfilled by a combination of default implementations and customizations by the individual CELL application services that wish to leverage the security framework. There are also a few reusable cell-security persistence libraries we created to let services that leverage this persist their security data (both authoritative stores and local consumer caches) across various databases (MongoDB and/or JPA etc.), as well as another library to hook into the streams of security events that flow through CELL's Kafka event bus.

Spring Cloud impressions

When I started using Spring Cloud (in the early days of the Brixton release), I developed a love-hate relationship with it. After a few initial early successes with simple prototypes, I was extremely impressed with the discovery, configuration and abstract “service name”-based way of accessing peer services (via Feign clients bound to the discovery service)… you could quickly see the advantages of using these libraries to build a true platform that could scale to N in several different ways and take care of a lot of the boilerplate “microservices” stuff for you.

That said, once we really got into developing CELL we ended up having two development paths.

The first was one team working on a set of reusable libraries for CELL applications to leverage when integrating into the CELL microservice ecosystem. This consisted of creating several abstractions that bring together the required Spring Cloud libraries, pre-integrated via base configuration for CELL, to make them easy to “drop in” to a new CELL app without having to wade too far into the details of Spring Cloud, letting the service developer focus on their service. This was about 70% of the development effort, heavily front-loaded at the start of the project.

The second was the other team using those libraries to actually build the business-logic services, which was the whole point of this thing in the first place. This accounted for about 30% of the work in the beginning and today, now that the base framework of CELL is established, about 80-90% of it.

The hate part (well, not true hate, but you know what I mean… friendly frustration) ended up being the number of man-hours spent at the start of the project dealing with and learning spring-cloud. There is a tangible learning curve to be aware of: working around bugs, finding real issues in spring-cloud, and working through perceived ones that turned out to be misunderstandings born of spring-cloud's complexity.

I'm not going to go into each specific issue here; suffice it to say there were a lot of them, and much time was spent debugging Spring Cloud code trying to figure out why certain things failed, or learning how they behaved so we could customize and properly configure things. In the end most of the issues could be worked around or were not that hard to fix… it's just the time it took to figure out the underlying causes, produce a reproducible sample and convey it to the spring-cloud developers to get help. (The spring-cloud developers, BTW, are excellent and VERY responsive; kudos to them for that.)

Lastly, taking each CELL artifact (jar) and getting it wrapped up in a Docker container was not a huge ordeal. In the deployed footprint, each CELL artifact is a separate Docker Swarm service that is deployed on its own overlay network (a separate one per CELL version). As stated previously, the CELL router (Zuul) is the only service that needs to be exposed on a published swarm port; upstream data-center load balancers can just point at that.
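In swarm-mode terms, the deployment boils down to something like the following (image and network names are hypothetical; only the router publishes a port):

    docker network create --driver overlay cell-net-1-0
    docker service create --name cell-router --network cell-net-1-0 --publish 8080:8080 myrepo/cell-router:1.0
    docker service create --name cell-service-1 --network cell-net-1-0 myrepo/cell-service-1:1.0

Services on the same overlay network reach each other directly, while the upstream load balancers only ever see the swarm nodes' IPs and the router's published port.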

So would I recommend Spring-Cloud?

Yes. Spring Cloud at its heart is a pretty impressive wrapper framework around a lot of the other microservices tooling that is out there. It has a responsive and helpful community (definitely leverage Gitter.im if you need help!). The project has matured considerably since I first used it, and many of the issues I was dealing with are now fixed. Compared to writing all the necessary pieces of a robust microservices ecosystem yourself… I'll take this framework any day.

Final note: I would NOT recommend using spring-data-rest. We used it on a few of the CELL application-logic services, and while its main benefit is giving you a lot of CRUD REST services in a HATEOAS fashion, it's just not that easy to customize, has a lot of bugs and was generally a pain to work with. At the end of the day it would have been easier to code our own suite of CRUD services instead of relying on it.


Book review: Building Microservices

Recently I read Sam Newman's “Building Microservices”; at ~280 pages it's a fairly quick read. The reviews on this book overall are mixed, and I can see where readers are coming from. From the title one might expect coverage of some of the microservices frameworks out there, concrete examples, maybe some actual code… but you won't really find that here. Instead you will find a pretty good overview of various architectural approaches to modern application design, covering general topics such as proper separation of concerns, unit testing, continuous integration, automation, infrastructure management, service discovery, fault tolerance, high availability and security.

In reality, none of the principles covered in this book are the exclusive domain of “microservice” architectures; they can (and should) be applied to any application you are considering deploying, whether it's a monolith or a suite of microservices interacting as parts of a larger application.

In that light, I think this book is definitely worth a read, if for nothing more than to give your team a refresher on good design principles and how they can be realized with some of the newer frameworks and toolsets that have come out of our community in recent years. The material presented is sound.

Execute Powershell commands via Node.js, REST, AngularJS

Building on my last post on stateful-process-command-proxy, this post will cover how you can leverage that node.js module to expose the capabilities of Powershell cmdlets over a REST API presented through an AngularJS interface. Why would one want to do this, you ask? I covered this in my last post, but I will briefly explain it here.

(Note: what is described below could just as easily be built for Bash processes, as the underlying module supports them.)

The use case came out of the need to automate certain calls that manage various objects within Microsoft o365's environment. Unfortunately Microsoft's GraphAPI does not expose all of the functionality that is available via the suite of Powershell cmdlets for o365 services. Secondly, when you do these operations via Powershell, they require a pre-established remote PSSession to o365… and establishing (and tearing down) a new remote PSSession whenever you need to invoke a cmdlet against a remote resource (a remote server, or an o365 endpoint) is expensive. Lastly, who wants to sit there and manually run these commands when they could be automated and invoked on demand via other means, such as a web service? Hence stateful-process-command-proxy came to be: it provides the building-block bridge between node.js and a pool of pre-established Powershell consoles. Once you have node.js talking to stateful-process-command-proxy, you can build whatever you want on top of that in node.js to mediate the calls.

Layer one

The first higher-level NPM module that builds on stateful-process-command-proxy is powershell-command-executor.

What this adds on top of stateful-process-command-proxy is probably best described by this diagram:

[Diagram: PSCommandService sitting between callers and the pooled Powershell processes managed by StatefulProcessCommandProxy]

The main thing to understand is that the module provides the PSCommandService class, which takes a registry of pre-defined “named” commands and their permissible arguments. The registry is nothing more than an object full of configuration and is easy to define. You can see an example in the project which defines a bunch of named “commands” and their arguments, usable against o365 to manipulate users, groups etc. PSCommandService is intended to serve as a decoupling point between the caller and the StatefulProcessCommandProxy… in other words, a place where you can restrict and limit the types of commands and (sanitized) arguments that can ever reach the Powershell processes pooled within StatefulProcessCommandProxy.

It is PSCommandService's responsibility to look up the named command you want to execute, sanitize the arguments and generate a literal Powershell command string that is then sent to the StatefulProcessCommandProxy to be executed. Once the command is received, StatefulProcessCommandProxy is responsible for checking that it passes the configured command whitelist and blacklist before executing it. The sample o365Utils.js config file provides a set of pre-canned (usable) examples of init/destroy commands, auto-invalidation commands and whitelist/blacklist configs that you can use when constructing the StatefulProcessCommandProxy that the PSCommandService will use internally.

Layer two

The next logical step is to expose some way of invoking these pre-canned “commands” to callers. One way to do this is via a web service.

WARNING: doing such a thing without much thought can expose you to serious security risks. You need to really think about how you will secure access to this layer, the types of commands you expose, your argument sanitization, and the filtering of permissible commands via whitelists and blacklists for injection protection. Another precaution you may want to take is running it only on localhost, for experimental purposes only. READ OWASP's article on command injection.

Ok, with that obvious warning out of the way, here is the next little example project, which provides this kind of layer on top of the latter: powershell-command-executor-ui.

This project is a Node.js ExpressJS app that provides a simple set of REST services that allow the caller to:

  • get all available named commands in the PSCommandService registry
  • get an individual command configuration from the registry
  • generate a command from a set of arguments
  • execute the command via a set of arguments and get the result
  • obtain the “status” of the underlying StatefulProcessCommandProxy and its history of commands

Given the above set of services, one can easily build a user interface that dynamically lets the user invoke any command in the registry and see the results… and that is exactly what this project does, via an AngularJS interface (albeit a bit crude).

Hopefully this will be useful to others out there, enjoy.


Generating Java classes for the Azure AD Graph API

NOTE: I've since abandoned this avenue for generating POJOs for the GraphAPI service. The Restlet Generator simply has too many issues in the resulting output (i.e. not handling package names properly, generics issues, not dealing with Edm.[types] etc.). However, this still may be of use to someone who wants to explore it further.

I recently had to write some code to talk to the Azure AD Graph API. This is a REST-based API that exchanges data via typical JSON payloads. For those having to write a Java client to talk to it, a good starting point is this sample API application to get your feet wet. To those fluent in Java that code is less than desirable, but it has no dependencies, which is nice.

When coding against a REST service it's often nice to have a set of classes that you can marshal to/from the JSON payloads you are interacting with. Behind the scenes it appears that this Azure Graph API is an OData app, which does present “$metadata” about itself… cool! So now we can generate some classes…

https://graph.windows.net/YOUR_TENANT_DOMAIN/$metadata

OR

https://graphregistry.cloudapp.net/GraphRegistry.svc/YOUR_TENANT_DOMAIN/$metadata

So what can we use to generate some Java classes against this? Let's use the Restlet OData Extension. This Restlet extension can generate code off OData schema documents. You will want to follow these instructions as well for the code generation.

IMPORTANT: you will also need a fork/version of Restlet that incorporates this pull-request fix for a NullPointerException that you will otherwise encounter during code generation. (The bug exists in Restlet 2.2.1.)

The command I ran to generate the Java classes for all the Graph API entities was as follows (run this from WITHIN the lib/ directory in the extracted Restlet zip/tarball you downloaded). In the command below, do NOT specify “$metadata” in the Generator URI as the tool appends that automatically.

java -cp org.restlet.jar:org.restlet.ext.xml.jar:org.restlet.ext.atom.jar:org.restlet.ext.freemarker.jar:org.restlet.ext.odata.jar:org.freemarker_2.3/org.freemarker.jar org.restlet.ext.odata.Generator https://graph.windows.net/YOUR_TENANT_DOMAIN/ ~/path/to/your/target/output/dir

Pitfalls:

  • If you run the command and get the error “Content is not allowed in prolog.” (which I did), there might be some extra characters prepended to the opening “<edmx:Edmx” in the “$metadata” schema document that the endpoint returns. If this is the case, do the following:
    • Download the source XML $metadata document to your hard drive.
    • Open it up in an editor and REMOVE the <?xml version="1.0" encoding="utf-8"?> declaration that precedes the first “<edmx…” element.
    • Next, just to ensure there are no more hidden chars at the start of the document, open it in a hex editor and get rid of any other hidden chars that precede the first “<edmx…” element.
    • Save the changes locally, and fire up a local webserver (or a webserver that lives anywhere) so that http://yourwebserver/whatever/$metadata serves up that XML file.
    • Then alter the Restlet Generator command above to reference this modified URI as appropriate. Remember that you do NOT specify “$metadata” in the Generator URI, as the tool appends that automatically.
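If you end up abandoning generation (as the note at the top says I eventually did), the low-tech fallback is simply hand-rolling the few classes you need and letting a mapper like Jackson handle the marshalling. A minimal sketch; the field names follow the Graph API's user entity, but the class itself is purely illustrative:

    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.databind.ObjectMapper;

    // Hand-rolled stand-in for a generated Graph API "User" class.
    @JsonIgnoreProperties(ignoreUnknown = true)  // the API returns many more fields
    public class User {
        public String objectId;
        public String displayName;
        public String userPrincipalName;
    }

    // usage:
    // User u = new ObjectMapper().readValue(jsonPayload, User.class);

Tedious for dozens of entities, which is exactly why code generation was appealing in the first place.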

Dropwizard Java REST services

To sum it up: Dropwizard rocks.

I've done quite a bit of WS development, on both the client side and the server side, interacting with SOAP, REST and various XML/JSON RPC hybrid services. For my latest project I need to expose a set of REST services to a myriad of clients: phones, fat JS clients etc. This application also needs to talk to other nodes or “agents” doing work in a distributed cloud environment. The core engine of this application has no need for a bloated higher-level MVC/GUI-supporting stack, and bringing one into this library would just be a pain. I've always liked the simplicity of being able to skip the whole JEE/Spring/Tomcat container stack and just do a plain old “java -jar” to run my application… but the reality of being able to do that has been lacking… until now.

In looking at the available options for the framework (Restlet, Spring MVC REST, Spring Data REST and others), I immediately became discouraged when looking at examples for setting them up; they are full of complexity and configuration and generally require a full application-server container to run within, which just adds further complexity to your setup.

Then I stumbled across Dropwizard by the folks at Yammer. I encourage everyone reading this to just try the simple Hello World example they have on their site. If you have any experience in this space and an appreciation for decoupling, you will immediately recognize the beauty of this little framework and the power it brings to the table from a deployment standpoint. Build your core app-engine back-end library as you normally would, toss in Dropwizard, expose some REST services to extend your interfaces to the outside world, throw it up on a server and “java -jar myapp server myconfig.yml” and you are ready to rock (they make this possible by embedding Jetty). Create a few little JS/HTML files for a fat JS client (I'd recommend Angular), hook into your REST services, and you have an awesome little decoupled application.
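To show just how small the surface area is, here is a minimal sketch in the spirit of their Hello World example (written against the modern io.dropwizard package names; the early Yammer-era packages differed slightly):

    import io.dropwizard.Application;
    import io.dropwizard.Configuration;
    import io.dropwizard.setup.Environment;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    public class HelloApplication extends Application<Configuration> {

        public static void main(String[] args) throws Exception {
            // e.g. java -jar myapp.jar server myconfig.yml
            new HelloApplication().run(args);
        }

        @Override
        public void run(Configuration config, Environment environment) {
            environment.jersey().register(new HelloResource());
        }

        // A plain Jersey resource; Dropwizard serves it from embedded Jetty.
        @Path("/hello")
        @Produces(MediaType.APPLICATION_JSON)
        public static class HelloResource {
            @GET
            public String sayHello() {
                return "{\"message\":\"hello\"}";
            }
        }
    }

One class, no container, one self-contained jar.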

Integrating Restlet with Spring

For those of you who would like to get the Restlet 2.0 (currently M5) release integrated with your existing Spring application, hopefully this post will be of some help. I recently had to do this, and unfortunately the documentation related to Spring integration on the Restlet site is scattered across various docs, and some of it appears out of date. What I describe below worked with source code straight from the Restlet SVN trunk just before the M5 release, so you should be good to go if you use the M5 release (JEE edition).

First off, I am assuming you have an existing web application with a web.xml file and are using Spring. Secondly, I am just trying to give you a working web.xml and the corresponding Spring configuration to get up and running; I am not explaining the details of how Restlet works, as you can find that on the Restlet site.

First you will want to make sure you have the Restlet JEE 2.0 M5 edition. Make sure you grab the JEE version and not the JSE version, as the latter does not include the Spring integration extension. Once downloaded, extract the ZIP to a location on your drive. The JEE zip package contains a ton of Restlet jar files; the three we care about are org.restlet.jar, org.restlet.ext.spring.jar and org.restlet.ext.servlet.jar.

If you are using Maven, you can add the following repository and dependencies to your POM by using the repository instructions on the Restlet site. NOTE: as of today the Restlet repository does NOT have the M5 release up there, so you will have to manually add the M5 jars to your repository by doing the following for each of the 3 jars.

mvn install:install-file -Dfile=/PATH/TO/RESTLET-m5-ZIP-EXTRACT-DIR/lib/org.restlet.jar -DgroupId=org.restlet -DartifactId=org.restlet -Dversion=2.0-SNAPSHOT-M5 -Dpackaging=jar

mvn install:install-file -Dfile=/PATH/TO/RESTLET-m5-ZIP-EXTRACT-DIR/lib/org.restlet.ext.spring.jar -DgroupId=org.restlet -DartifactId=org.restlet.ext.spring -Dversion=2.0-SNAPSHOT-M5 -Dpackaging=jar

mvn install:install-file -Dfile=/PATH/TO/RESTLET-m5-ZIP-EXTRACT-DIR/lib/org.restlet.ext.servlet.jar -DgroupId=org.restlet -DartifactId=org.restlet.ext.servlet -Dversion=2.0-SNAPSHOT-M5 -Dpackaging=jar

The above 3 commands will manually install the three Jars into your Maven repository. Next you can configure your POM to add the official Maven repository plus the dependencies to the 3 Jars you installed above. Note that the repository entry is sort of meaningless at this point because you manually installed the jars above. It is IMPORTANT that the version elements in your dependencies below MATCH exactly the versions you specified in the commands above!

	<repository>  
    	<id>maven-restlet</id>  
    	<name>Public online Restlet repository</name>  
    	<url>http://maven.restlet.org</url>  
	</repository>

	<dependency>
    	<groupId>org.restlet</groupId>
    	<artifactId>org.restlet</artifactId>
    	<version>2.0-SNAPSHOT-M5</version>
	</dependency>

	<dependency>
    	<groupId>org.restlet</groupId>
    	<artifactId>org.restlet.ext.spring</artifactId>
    	<version>2.0-SNAPSHOT-M5</version>
	</dependency>

	<dependency>
    	<groupId>org.restlet</groupId>
    	<artifactId>org.restlet.ext.servlet</artifactId>
    	<version>2.0-SNAPSHOT-M5</version>
	</dependency>

Ok, great. Next we need to configure your web.xml, open it up and add the following entries in the appropriate spots:

  	<servlet>
      	<servlet-name>myRESTApi</servlet-name>
      	<servlet-class>org.restlet.ext.spring.SpringServerServlet</servlet-class>
      	 <init-param>
                <param-name>org.restlet.component</param-name>
                 <!-- this value must match the bean id of the Restlet component you will configure in Spring (below) -->
                <param-value>restletComponent</param-value>
         </init-param>
  	</servlet>

  	<servlet-mapping>
        <servlet-name>myRESTApi</servlet-name>
        <url-pattern>/my/REST/api/*</url-pattern>
  	</servlet-mapping>

Now your web.xml is configured to take all requests to /my/REST/api/* and send those to a Restlet Component which you will wire up in your Spring configuration. So... bring up your applicationContext.xml or whatever you have it named and add the following entries:


<!-- our SpringComponent which binds us to the Restlet servlet configured above -->
<bean id="restletComponent" class="org.restlet.ext.spring.SpringComponent">
         <!-- the defaultTarget for this component is our Restlet Application -->
	<property name="defaultTarget" ref="myRestletApplication" />
</bean>

<!-- your Restlet application. This class extends "org.restlet.Application" -->
<bean id="myRestletApplication" class="my.restlet.MyRestletApplication">
         <!-- all requests to this Application will be sent to myPath2BeanRouter -->
	<property name="root" ref="myPath2BeanRouter"/>
</bean>

<!-- This router automagically routes requests to beans extending org.restlet.resource.ServerResource or org.restlet.Restlet whose name starts with a "/" slash matching the request -->
<bean name="myPath2BeanRouter" class="org.restlet.ext.spring.SpringBeanRouter"/>
 
<!-- This bean will handle all requests made to /my/REST/api/myResource (GET/POST/PUT etc).
It extends "org.restlet.Restlet" or "org.restlet.resource.ServerResource" -->
 <bean name="/myResource" autowire="byName" scope="prototype" 
    		class="my.restlet.package.resources.MyResourceBean">
    		
    	<property name="somePropertyOfMine" ref="someOtherSpringBean"/>
 </bean>
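For completeness, the MyResourceBean wired above is just a plain Restlet 2.0 ServerResource. A minimal sketch (the package declaration is omitted so it compiles, and the injected collaborator is hypothetical):

    import org.restlet.representation.Representation;
    import org.restlet.representation.StringRepresentation;
    import org.restlet.resource.Get;
    import org.restlet.resource.ServerResource;

    public class MyResourceBean extends ServerResource {

        private Object somePropertyOfMine;  // injected by Spring via the setter below

        public void setSomePropertyOfMine(Object somePropertyOfMine) {
            this.somePropertyOfMine = somePropertyOfMine;
        }

        // Handles GET /my/REST/api/myResource
        @Get
        public Representation represent() {
            return new StringRepresentation("hello from myResource");
        }
    }

Because the SpringBeanRouter keys off the bean name ("/myResource"), adding another resource is just another bean whose name is the URI path you want it served from.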

Ok, if you were having trouble getting Spring working with Restlet, I hope this helped get you rolling. Restlet is a cool project that works great and can get a REST API up and running pretty quickly (provided you are good at crawling through somewhat scattered documentation). Here are a few other links you may want to reference:

Restlet 2.0 Extensions API
Restlet 2.0 JEE API

Also, I am posting the following error that troubled me when trying to get this to work. The configuration shown above is what got me past it, using the correctly fixed releases: to AVOID this error, ENSURE you are using Restlet 2.0 M5 or a custom build from the trunk. Prior to 9/25/09, people were hitting it.

Message: No target class was defined for this finder
Complete message: org.restlet.ext.spring.SpringFinder$$EnhancerByCGLIB

Review: RESTful Web Services

This is a book review for “RESTful Web Services” by Leonard Richardson and Sam Ruby.

If you want to get under the hood and really understand how to properly implement a RESTful web service, then this book is for you. The treatment of the topic is excellent. After reading it, I feel that many folks out there writing “REST” APIs, myself included, have written variants of a REST-RPC hybrid rather than a true REST implementation. REST in its true form is a total change of mindset when it comes to creating a web service, and this book helps you get there.

The authors do not turn this into a REST vs. SOAP discussion, but rather prep the reader quite well by exploring the history of web services and very clearly explaining the differences between the different approaches, with the positives and negatives of each (RPC, REST-RPC, SOAP, WS-*, etc.). They also give a primer on the basic tools used to implement REST clients in various languages, covering the popular HTTP client libs out there (cURL, Apache HttpClient, rest-open-uri, libcurl etc.), and present a discussion of “Resource Oriented Architecture”, walking the reader through several example implementations (both read-only and read-write) as well as the RESTful thought process behind the designs.

The book gives excellent treatment to the differences between PUT and POST and the rules for deciding between them, a decision that can be difficult in the real world. It also shows various ways of implementing a REST service when some of the HTTP methods (primarily PUT/DELETE) are not available due to your HTTP server setup. Lastly it covers some of the available server-side frameworks; the one I found most interesting, and hope to work with soon, is the Java project called Restlet.

Readers will also find the HTTP status code and header reference extremely valuable: for each status code and header, the authors give a very clear description of its meaning in a RESTful web service.

Recommended? Yes, go get it today.
Skills: Java/Ruby/Python – intermediate to advanced.

Advanced HTTP operations in Flex outside of AIR

I am currently pretty deep into a Flex/AS3 RIA desktop app project with several advanced needs: downloading partial files to the desktop (byte-range requests), executing HEAD requests to get remote file sizes, executing multipart POSTs, talking to some REST APIs, and finally being able to read and write HTTP headers… outside of AIR? Good luck with URLStream, URLRequest, URLRequestMethod and URLRequestHeader.

Due to security sandbox restrictions, unless your application is running within Adobe AIR, you are restricted to simple GETs and POSTs. You are also NOT allowed to touch the following headers:

Accept-Charset, Accept-Encoding, Accept-Ranges, Age, Allow, Allowed, Authorization, Charge-To, Connect, Connection, Content-Length, Content-Location, Content-Range, Cookie, Date, Delete, ETag, Expect, Get, Head, Host, Keep-Alive, Last-Modified, Location, Max-Forwards, Options, Origin, Post, Proxy-Authenticate, Proxy-Authorization, Proxy-Connection, Public, Put, Range, Referer, Request-Range, Retry-After, Server, TE, Trace, Trailer, Transfer-Encoding, Upgrade, URI, User-Agent, Vary, Via, Warning, WWW-Authenticate, x-flash-version.

That's basically all of the ones I needed to touch… so what to do? In my case I was developing file-management functionality that runs in both Adobe AIR and MDM Zinc. Amongst other protocols such as FTP, I needed to be able to pull file updates over HTTP as well (plus authentication, HEAD checks etc.). So I had to create a generic IHttpClient interface abstraction, which then allowed me to implement runtime-specific clients. For the AIR side of things I was good to go using all of Adobe's URL* classes right out of the box; they work great for the AirHttpClient implementation of my interface. For my Zinc client, however, I was still stuck, until I ran across a fantastic little library that provides a custom HTTP stack built on top of the Flex socket layer.

The AS3 library that ended up powering my ZincHttpClient was as3httpclientlib. It solved all of my problems when running outside of the AIR environment. With as3httpclientlib you can do all of the following and more:

* GET, HEAD, PUT, POST, DELETE
* multipart/form-data (PUT and POST)
* HTTPS support using AS3Crypto TLS
* Post with application/x-www-form-urlencoded
* Reading chunked (Transfer-Encoding)

If you need somewhat more advanced HTTP functionality outside of the AIR runtime, I HIGHLY recommend as3httpclientlib. I have put this library through its paces doing byte-range requests, HEAD requests, header manipulation, and downloading and posting all sorts of file sizes, and it works great. My only note is that it is a tad slower than URLStream-based HTTP downloads under Adobe AIR; regardless, this little HTTP library is worth it. Kudos to the AS3HttpClientLib team!