Ideas worth spreading

If you haven’t been exposed to Nathan Marz’s ideas on Big Data, the following links are definitely worth your time:

http://manning.com/marz/

http://www.infoq.com/presentations/Complexity-Big-Data

http://nathanmarz.com/speaking/

Processing ModSecurity audit logs with Fluentd

I recently had a need to take tons of raw ModSecurity audit logs and make use of them. I first tried Logstash, and then Apache Flume (see my previous articles). Next in line was Fluentd, which is what this article is about. Long story short, I ended up having to write a Fluentd output plugin that takes the output from the tail multiline plugin and formats it into a more structured, first-class object that looks like the example below.

The ModSecurity Fluentd plugin is located here on GitHub: https://github.com/bitsofinfo/fluentd-modsecurity

  1. Get some audit logs generated from ModSecurity and throw them into a directory
  2. Edit your Fluentd config file and customize its input to use the tail multiline plugin and then the modsecurity plugin; an example is here.
  3. Customize your output(s)
  4. On the command line: "fluentd -c ./fluent.conf -vv"
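For step 2, the overall shape of the config is roughly as sketched below. Note this is a hypothetical illustration: the plugin type names and parameters shown (e.g. `tail_multiline`, `format_firstline`) are assumptions, so consult the example config linked above for the real settings.

```
# Hypothetical fluent.conf sketch -- plugin names/parameters below are
# assumptions; see the example config in the fluentd-modsecurity repo.
<source>
  type tail_multiline                          # the tail multiline input plugin
  path /path/to/your/modsec_audit.log          # logs from step 1
  tag raw.modsecurity
  format_firstline /^--[a-fA-F0-9]{8}-A--$/    # assumed audit-entry boundary
</source>

<match raw.modsecurity>
  type modsecurity                             # the fluentd-modsecurity output plugin
</match>
```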

This was tested against the latest version of Fluentd available at the time of this article.

The end result is that with this configuration, your raw ModSecurity audit log entries will end up looking something like the JSON example below. Again, this is just how I ended up structuring the fields via the filters; you can fork and modify the plugin as you see fit to output a different format, or even make it more configurable.

EXAMPLE JSON OUTPUT, using https://github.com/bitsofinfo/fluentd-modsecurity

{
  "modsec_timestamp": "08/Nov/2013:06:22:59 --0400",
  "uniqueId": "C5g8kkk0002012221222",
  "sourceIp": "192.168.1.22",
  "sourcePort": "34156",
  "destIp": "192.168.0.2",
  "destPort": "80",
  "httpMethod": "GET",
  "requestedUri": "/myuri/x",
  "incomingProtocol": "HTTP/1.1",
  "myCookie": "myCookie=testValue",
  "requestHeaders": {
    "Host": "accts.x4.bitsofinfo2.com",
    "Connection": "keep-alive",
    "Accept": "*/*",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) Safari/537.36",
    "Referer": "https",
    "Accept-Encoding": "gzip,deflate,sdch",
    "Accept-Language": "en-US,en;q=0.8",
    "Cookie": "myCookie=testValue; myapp_sec_7861ac9196050da; special=ddd",
    "Incoming-Protocol": "HTTPS",
    "X-Forwarded-For": "192.1.33.22"
  },
  "XForwardedFor": "192.1.33.22",
  "XForwardedFor-GEOIP": {
    "country_code": "TW",
    "country_code3": "TWN",
    "country_name": "Taiwan",
    "region": "03",
    "region_name": "T'ai-pei",
    "city": "Taipei",
    "latitude": 25.039199829101562,
    "longitude": 121.5250015258789
  },
  "serverProtocol": "HTTP/1.1",
  "responseStatus": "200 OK",
  "responseHeaders": {
    "Vary": "Accept-Encoding",
    "Expires": "Fri, 08 Aug 2014 10",
    "Cache-Control": "public, max-age=31536000",
    "Content-Encoding": "deflate",
    "Content-Type": "application/x-javascript; charset=UTF-8",
    "Set-Cookie": "zippy=65.sss31; path=/; domain=accts.x4.bitsofinfo2.com",
    "Connection": "close",
    "Transfer-Encoding": "chunked"
  },
  "auditLogTrailer": {
    "Apache-Handler": "proxy-server",
    "Stopwatch": "1375957379601874 39178 (989 4992 -)",
    "Producer": "ModSecurity for Apache (http://www.modsecurity.org/); core ruleset",
    "Server": "Apache (party6)",
    "messages": [
      {
        "info": "Warning 1. Operator EQ matched 0 at GLOBAL.",
        "file": "/etc/d4/modsechttp_policy.conf",
        "line": "120",
        "id": "960903",
        "msg": "ModSecurity does not support content encodings",
        "severity": "WARNING"
      },
      {
        "info": "Warning 2. Operator EQ matched 0 at GLOBAL.",
        "file": "/etc/d4/modsechttp_policy.conf",
        "line": "120",
        "id": "960903",
        "msg": "ModSecurity does not support content encodings",
        "severity": "WARNING"
      }
    ]
  },
  "event_date_microseconds": 1.375957379601874e+15,
  "event_date_milliseconds": 1375957379601.874,
  "event_date_seconds": 1375957379.601874,
  "event_timestamp": "2013-08-08T10:22:59.601Z",
  "secRuleIds": [
    "960011",
    "960904",
    "960903"
  ],
  "matchedRules": [
    "SecRule \"REQUEST_METHOD\" \"@rx ^(?:GET|HEAD)$\" ",
    "SecRule \"&REQUEST_HEADERS:Content-Type\" \"@eq 0\" \"phase:2,deny,status:406,t:lo",
    "SecRule \"REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS|XML:/*|!REQUEST_HEADERS:Ref",
    "SecAction \"phase:2,status:406,t:lowercase,t:replaceNulls,t:compres",
    "SecRule \"&GLOBAL:alerted_960903_compression\" \"@eq 0\" \"phase:2,log,deny,status:406,t:lower"
  ]
}
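Once the entries are structured like this, downstream consumption is straightforward. As a quick standalone illustration (not part of the plugin), a few lines of Python can pull alert-relevant fields out of a record shaped like the example above:

```python
import json

# A trimmed-down record in the structure shown above
record = json.loads("""
{
  "uniqueId": "C5g8kkk0002012221222",
  "sourceIp": "192.168.1.22",
  "responseStatus": "200 OK",
  "auditLogTrailer": {
    "messages": [
      {"id": "960903", "severity": "WARNING",
       "msg": "ModSecurity does not support content encodings"}
    ]
  },
  "secRuleIds": ["960011", "960904", "960903"]
}
""")

# Pull out the fields an alerting pipeline might care about
triggered = set(record["secRuleIds"])
warnings = [m["msg"] for m in record["auditLogTrailer"]["messages"]
            if m["severity"] == "WARNING"]

print(record["sourceIp"], sorted(triggered))
print(warnings[0])
```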


Deserializing Modsecurity Audit logs with Apache Flume

This post will be updated in the coming days/weeks; however, when looking at using Apache Flume to ingest some ModSecurity audit logs, it quickly became apparent that Flume's SpoolingDirectorySource lacked the ability to de-serialize "events" from a file that span many newlines (\n). Lacking this support, and seeing that an outstanding ticket already existed on a related subject at https://issues.apache.org/jira/browse/FLUME-1988, I went ahead and coded one up.

Please see RegexDelimiterDeSerializer and its corresponding unit test attached to FLUME-1988. Hopefully this can be included in an actual Flume release. In the meantime, you should be able to include this and the related classes in a local copy of the Flume source and do your own build to get this functionality. The net result of using this regex patch is that each ModSecurity audit log entry (which spans many lines) is collapsed into *one* Flume message. What you do next is up to you; however, the next best thing is to pump this into the Flume MorphlineInterceptor to begin grokking and parsing the raw multi-line modsec event. Note there are some possible synergies and re-use of regexes once you start using Morphlines and the Grok patterns we came up with for my Logstash-based solution.
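The core idea of the deserializer, independent of Flume, is just splitting a stream into events wherever an "event end" regex matches. Here is a tiny standalone Python sketch of that technique (illustrative only, not the Flume code), using the same modsec trailer pattern as the eventEndRegex in the config further below:

```python
import re

# Each modsec audit entry ends with a "--XXXXXXXX-Z--" trailer line
EVENT_END = re.compile(r"--[a-fA-F0-9]{8}-Z--")

def split_events(text, include_end=True):
    """Split raw multi-line log text into one string per audit entry."""
    events, current = [], []
    for line in text.splitlines():
        current.append(line)
        if EVENT_END.search(line):
            if not include_end:
                current.pop()
            events.append("\n".join(current))
            current = []
    return events

# Two fake audit entries, each spanning multiple lines
raw = (
    "--deadbee0-A--\n[08/Nov/2013:06:22:59] entry one\n--deadbee0-Z--\n"
    "--deadbee1-A--\n[08/Nov/2013:06:23:14] entry two\n--deadbee1-Z--\n"
)
events = split_events(raw)
print(len(events))  # 2
```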

a) Clone the official Flume source code

b) Drop the files attached to FLUME-1988 into your cloned copy of the Flume source

c) Follow the instructions located here to modify the source so that you can build a Flume snapshot distro that contains all the dependencies for Morphline (https://groups.google.com/a/cloudera.org/d/msg/cdk-dev/7T4pTebdWN4/sBHGkoS70LkJ)

d) From the root of the Flume project run "mvn install -DskipTests=true" and take the tarball generated in "flume-ng-dist/target" and copy it somewhere else. (This is the freshly built Flume dist with the regex deserializer support.)

e) Go to where you extracted the distro, whittle up your own flume config file and morphline config using the snippets below, and then run "bin/flume-ng agent --conf conf --conf-file conf/flume.conf -Dflume.root.logger=DEBUG,console -n agent"

Here is a sample flume config snippet that uses this:


agent.sources = src1
agent.channels = memoryChannel
agent.sinks = loggerSink

# For each one of the sources, the type is defined
agent.sources.src1.type = spooldir
agent.sources.src1.channels = memoryChannel
agent.sources.src1.spoolDir = /path/to/my_modsec_logs
agent.sources.src1.deserializer = REGEX
agent.sources.src1.deserializer.outputCharset = UTF-8
agent.sources.src1.deserializer.eventEndRegex = --[a-fA-F0-9]{8}-Z--
agent.sources.src1.deserializer.includeEventEndRegex = true

agent.sources.src1.interceptors = morphlineinterceptor
agent.sources.src1.interceptors.morphlineinterceptor.type = org.apache.flume.sink.solr.morphline.MorphlineInterceptor$Builder
agent.sources.src1.interceptors.morphlineinterceptor.morphlineFile = /path/to/conf/morphline.conf
agent.sources.src1.interceptors.morphlineinterceptor.morphlineId = morphline1


Next is a sample "morphline.conf" configuration which will just emit each ModSecurity message from the audit log to standard out when running Flume. You can do the rest from there (have fun parsing). For further details, please refer to the morphlines documentation.

morphlines : [
  {
    id : morphline1
    importCommands : ["com.cloudera.**"]

    commands : [
      {
        readMultiLine {
          regex: ".*"
          charset : UTF-8
        }
      }

      # log the record at DEBUG level to SLF4J
      { logDebug { format : "output record: {}", args : ["@{}"] } }

    ]
  }
]

Logstash for ModSecurity audit logs

I recently had a need to take tons of raw ModSecurity audit logs and make use of them. I ended up using Logstash as a first stab at getting them from their raw format into something that could be stored somewhere more useful, like a database or search engine. Nicely enough, out of the box Logstash has an embeddable ElasticSearch instance that Kibana can hook up to. After configuring your Logstash inputs, filters, and outputs, you can be querying your log data in no time... that is, assuming writing the filters for your log data takes you close to "no time", which is not the case with modsec's more challenging log format.

After searching around for some ModSecurity/Logstash examples, and finding only this one (for modsec entries in the Apache error log), I was facing the task of having to write my own to deal with the ModSecurity audit log format... arrggg!

So after a fair amount of work, I ended up with a Logstash configuration that works for me... hopefully it will be of use to others out there as well. Note, this is certainly not perfect, but it is intended to serve as an example and starting point for anyone looking for one.

The ModSecurity Logstash configuration file (and tiny pattern file) is located here on GitHub: https://github.com/bitsofinfo/logstash-modsecurity

  1. Get some audit logs generated from ModSecurity and throw them into a directory
  2. Edit the Logstash ModSecurity config file (https://github.com/bitsofinfo/logstash-modsecurity) and customize its file input path to point to your logs from step (1)
  3. Customize the output(s) and review the various filters
  4. On the command line: java -jar logstash-[version]-flatjar.jar agent -v -f logstash_modsecurity.conf

This was tested against Logstash v1.2.1 and relies heavily on Logstash's "ruby" filter capability, which really was a lifesaver for working around some bugs and gaps in Logstash's growing set of filters. I'm sure as Logstash grows, much of what the custom ruby filters do can be migrated to stock filters over time.
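To give a feel for what those filters have to do, here is a small standalone Python sketch (not the actual Logstash ruby filter code) that parses a modsec audit log section-A line, like the rawSectionA value in the example output, into the same field names:

```python
import re

# Section A of a modsec audit entry:
# [timestamp] uniqueId sourceIp sourcePort destIp destPort
SECTION_A = re.compile(
    r"\[(?P<modsec_timestamp>[^\]]+)\]\s+"
    r"(?P<uniqueId>\S+)\s+"
    r"(?P<sourceIp>\S+)\s+(?P<sourcePort>\d+)\s+"
    r"(?P<destIp>\S+)\s+(?P<destPort>\d+)"
)

line = "[17/Sep/2013:05:46:16 --0400] MSZkdwoB9ogAAHlNTXUAAAAD 192.168.0.9 65183 192.168.0.136 80"
fields = SECTION_A.match(line).groupdict()
print(fields["uniqueId"], fields["sourceIp"], fields["destPort"])
```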

The end result is that with this configuration, your raw ModSecurity audit log entries will end up looking something like the JSON example below. Again, this is just how I ended up structuring the fields via the filters. You can take the above configuration example and change the output to your needs.

EXAMPLE JSON OUTPUT, using this Logstash configuration


{
  "@timestamp": "2013-09-17T09:46:16.088Z",
  "@version": "1",
  "host": "razzle2",
  "path": "\/Users\/bof\/who2\/zip4n\/logstash\/modseclogs\/proxy9\/modsec_audit.log.1",
  "tags": [
    "multiline"
  ],
  "rawSectionA": "[17\/Sep\/2013:05:46:16 --0400] MSZkdwoB9ogAAHlNTXUAAAAD 192.168.0.9 65183 192.168.0.136 80",
  "rawSectionB": "POST \/xml\/rpc\/soapservice-v2 HTTP\/1.1\nContent-Type: application\/xml\nspecialcookie: tb034=\nCache-Control: no-cache\nPragma: no-cache\nUser-Agent: Java\/1.5.0_15\nHost: xmlserver.intstage442.org\nAccept: text\/html, image\/gif, image\/jpeg, *; q=.2, *\/*; q=.2\nConnection: keep-alive\nContent-Length: 93\nIncoming-Protocol: HTTPS\nab0044: 0\nX-Forwarded-For: 192.168.1.232",
  "rawSectionC": "{\"id\":2,\"method\":\"report\",\"stuff\":[\"kborg2@special292.org\",\"X22322mkf3\"],\"xmlrpm\":\"0.1a\"}",
  "rawSectionF": "HTTP\/1.1 200 OK\nX-SESSTID: 009nUn4493\nContent-Type: application\/xml;charset=UTF-8\nContent-Length: 76\nConnection: close",
  "rawSectionH": "Message: Warning. Match of \"rx (?:^(?:application\\\\\/x-www-form-urlencoded(?:;(?:\\\\s?charset\\\\s?=\\\\s?[\\\\w\\\\d\\\\-]{1,18})?)??$|multipart\/form-data;)|text\/xml)\" against \"REQUEST_HEADERS:Content-Type\" required. [file \"\/opt\/niner\/modsec2\/pp7.conf\"] [line \"69\"] [id \"960010\"] [msg \"Request content type is not allowed by policy\"] [severity \"WARNING\"] [tag \"POLICY\/ENCODING_NOT_ALLOWED\"]\nApache-Handler: party-server-time2\nStopwatch: 1379411176088695 48158 (1771* 3714 -)\nProducer: ModSecurity for Apache\/2.7 (http:\/\/www.modsecurity.org\/); core ruleset\/1.9.2.\nServer: Whoisthat\/v1 (Osprey)",
  "modsec_timestamp": "17\/Sep\/2013:05:46:16 --0400",
  "uniqueId": "MSZkdwoB9ogAAHlNTXUAAAAD",
  "sourceIp": "192.168.0.9",
  "sourcePort": "65183",
  "destIp": "192.168.0.136",
  "destPort": "80",
  "httpMethod": "POST",
  "requestedUri": "\/xml\/rpc\/soapservice-v2",
  "incomingProtocol": "HTTP\/1.1",
  "requestBody": "{\"id\":2,\"method\":\"report\",\"stuff\":[\"kborg2@special292.org\",\"X22322mkf3\"],\"xmlrpm\":\"0.1a\"}",
  "serverProtocol": "HTTP\/1.1",
  "responseStatus": "200 OK",
  "requestHeaders": {
    "Content-Type": "application\/xml",
    "specialcookie": "8jj220021kl==j2899IuU",
    "Cache-Control": "no-cache",
    "Pragma": "no-cache",
    "User-Agent": "Java\/1.5.1_15",
    "Host": "xmlserver.intstage442.org",
    "Accept": "text\/html, image\/gif, image\/jpeg, *; q=.2, *\/*; q=.2",
    "Connection": "keep-alive",
    "Content-Length": "93",
    "Incoming-Protocol": "HTTPS",
    "ab0044": "0",
    "X-Forwarded-For": "192.168.1.232"
  },
  "responseHeaders": {
    "X-SESSTID": "009nUn4493",
    "Content-Type": "application\/xml;charset=UTF-8",
    "Content-Length": "76",
    "Connection": "close"
  },
  "auditLogTrailer": {
    "Apache-Handler": "party-server-time2",
    "Stopwatch": "1379411176088695 48158 (1771* 3714 -)",
    "Producer": "ModSecurity for Apache\/2.7 (http:\/\/www.modsecurity.org\/); core ruleset\/1.9.2.",
    "Server": "Whoisthat\/v1 (Osprey)",
    "messages": [
      {
        "info": "Warning. Match of \"rx (?:^(?:application\\\\\/x-www-form-urlencoded(?:;(?:\\\\s?charset\\\\s?=\\\\s?[\\\\w\\\\d\\\\-]{1,18})?)??$|multipart\/form-data;)|text\/xml)\" against \"REQUEST_HEADERS:Content-Type\" required.",
        "file": "\/opt\/niner\/modsec2\/pp7.conf",
        "line": "69",
        "id": "960010",
        "msg": "Request content type is not allowed by policy",
        "severity": "WARNING",
        "tag": "POLICY\/ENCODING_NOT_ALLOWED"
      }
    ]
  },
  "event_date_microseconds": 1.3794111760887e+15,
  "event_date_milliseconds": 1379411176088.7,
  "event_date_seconds": 1379411176.0887,
  "event_timestamp": "2013-09-17T09:46:16.088Z",
  "XForwardedFor-GEOIP": {
    "ip": "192.168.1.122",
    "country_code2": "XZ",
    "country_code3": "BRZ",
    "country_name": "Brazil",
    "continent_code": "SA",
    "region_name": "12",
    "city_name": "Vesper",
    "postal_code": "",
    "timezone": "Brazil\/Continental",
    "real_region_name": "Region Metropolitana"
  },
  "matchedRules": [
    "SecRule \"REQUEST_METHOD\" \"@rx ^POST$\" \"phase:2,status:400,t:lowercase,t:replaceNulls,t:compressWhitespace,chain,t:none,deny,log,auditlog,msg:'POST request must have a Content-Length header',id:960022,tag:PROTOCOL_VIOLATION\/EVASION,severity:4\"",
    "SecRule \"REQUEST_FILENAME|ARGS|ARGS_NAMES|REQUEST_HEADERS|XML:\/*|!REQUEST_HEADERS:Referer\" \"@pm jscript onsubmit onchange onkeyup activexobject vbscript: <![cdata[ http: settimeout onabort shell: .innerhtml onmousedown onkeypress asfunction: onclick .fromcharcode background-image: .cookie onunload createtextrange onload <input\" \"phase:2,status:406,t:lowercase,t:replaceNulls,t:compressWhitespace,t:none,t:urlDecodeUni,t:htmlEntityDecode,t:compressWhiteSpace,t:lowercase,nolog,skip:1\"",
    "SecAction \"phase:2,status:406,t:lowercase,t:replaceNulls,t:compressWhitespace,nolog,skipAfter:950003\"",
    "SecRule \"REQUEST_HEADERS|XML:\/*|!REQUEST_HEADERS:'\/^(Cookie|Referer|X-OS-Prefs)$\/'|REQUEST_COOKIES|REQUEST_COOKIES_NAMES\" \"@pm gcc g++\" \"phase:2,status:406,t:lowercase,t:replaceNulls,t:compressWhitespace,t:none,t:urlDecodeUni,t:htmlEntityDecode,t:lowercase,nolog,skip:1\""
  ],
  "secRuleIds": [
    "960022",
    "960050"
  ]
}
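The event_date_* fields above are just the microsecond timestamp from the Stopwatch trailer converted into different units. The relationship can be sanity-checked with a few lines of standalone Python (illustrative only, not part of the Logstash config):

```python
from datetime import datetime, timezone

event_date_microseconds = 1379411176088695  # from the Stopwatch trailer

event_date_milliseconds = event_date_microseconds / 1000.0
event_date_seconds = event_date_microseconds / 1000000.0

# ISO-8601 timestamp truncated to millisecond precision
event_timestamp = datetime.fromtimestamp(
    event_date_seconds, tz=timezone.utc
).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"

print(event_timestamp)  # 2013-09-17T09:46:16.088Z
```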


Dropwizard Java REST services

To sum it up: Dropwizard rocks.

I've done quite a bit of web service development on both the client side and the server side, interacting with SOAP, REST, and XML/JSON RPC hybrid services. For my latest project I need to expose a set of REST services to a myriad of clients: phones, fat JS clients, etc. This application also needs to talk to other nodes, or "agents", that are doing work in a distributed cloud environment. The core engine of this application really has no need for a bloated higher-level MVC/GUI stack, and bringing that into this library would just be a pain. I've always liked the simplicity of being able to skip the whole JEE/Spring/Tomcat container stack and just do a plain old "java -jar" to run my application... but the reality of being able to do that has been lacking... until now.

In looking at the available options (such as Restlet, Spring MVC REST, Spring Data REST, and others), I immediately became discouraged when looking at examples for setting them up: they are full of complexity and lots of configuration, and they generally require a full application server container to run within, which just adds further complexity to your setup.

Then I stumbled across Dropwizard by the folks at Yammer. I encourage everyone reading this to just try the simple Hello World example they have on their site. If you have any experience in this space and an appreciation for decoupling, you will immediately recognize the beauty of this little framework and the power it brings to the table from a deployment standpoint. Build your core app engine back-end library as you normally would, toss in Dropwizard, expose some REST services to extend your interfaces to the outside world, throw it up on a server, run "java -jar myapp.jar server myconfig.yml", and you are ready to rock (they make this possible by in-lining Jetty). Create a few little JS/HTML files for a fat JS client (I'd recommend Angular), hook into your REST services, and you will have an awesome little decoupled application.


Logging in Jboss sucks

This is a venting post.

I have a need to do the following. My app uses the Log4J API and is deployed on JBoss 6.1.x.

A) Correlate log statements with a given thread of execution, logically identified by an arbitrary generated "request id" that my application sets at the entry point for a request, using ThreadLocals. No, this is not a discussion of NDC/MDC.

B) These log statements will end up wherever the logging system is configured to send them. I expect to be able to use that logging system's documented APIs/configuration to add additional targets for log entries (appenders, in Log4J lingo).

C) Due to (A) and the sheer volume, I need to create a highly customized appender for log events that has access to my application's context/services. The intent is to log these off asynchronously to a queue or a NoSQL DB, leveraging connections that my application has already configured and holds handles to. Hence I am not going to be able to rely on an instance of something that is instantiated outside of my application's context (i.e., I don't want to configure this appender via the container's default configuration mechanism, XML or whatever it is). I need to do this at runtime. My appender will need access to queues, configuration, and sources of connectivity that my application provides (not the deployment environment).

D) The logical approach would be to just use Log4J’s API and do something like:

Logger.getLogger("my.package.root").addAppender(new MySuperCustomizedAppender(configFromMyApp));

The above runs great in a unit test locally. I'm ready to go, let's deploy to JBoss!... oops, wait, once deployed nothing is happening... what's going on??

E) When I do a Logger.getLogger() I get a nice instance of org.jboss.logmanager.log4j.BridgeLogger; looking at the source, I see this little gem:

public void addAppender(final Appender newAppender) {
   // ignored
}

No warning. No Exception. No Message. Just silently discarded, letting you go ahead and waste your time trying to figure out what the hell is going on.

F) "Ok, someone else has certainly encountered this, let's google it"... yep, a search on "BridgeLogger jboss addAppender" will yield a whole lot of results and no definitive solution that really seems to do the job: several forum posts going on and on without a satisfactory answer, several JIRA tickets documenting the issue, etc. Overall the "workarounds" seem like way more trouble than they are worth (fiddling with jars, build/deploy routines, configurations, etc. on all app servers).

G) "Hmm, maybe I shouldn't be using Log4J; how about SLF4J?"... ok, well, SLF4J provides a nice facade for logging. I'd have to refactor a lot of my logging statements to use this API, but it might be manageable. Then I could specifically target Log4J and bypass JBoss's logging system... oops, wait, no. JBoss also includes an SLF4J adapter of its own in the parent server-level classpath, so if I bring in my own, since SLF4J does not permit multiple bindings, it will be a crapshoot as to which one it picks... JBoss's or the one you want. Again, the "solutions" for getting SLF4J to use your binding of choice over the one provided by JBoss amount to more "forum threads" lacking a clear, documented answer that does not involve your system administrators making farm-wide config changes... see (F) above. Forget it.

H) Now what? Create my own "extension" of SLF4J, or my own cloned interface of SLF4J, that gives me some sort of hook permitting me to route log events over my customized "appender"? Yet more abstractions.


Securing Foscam IP camera access over SSL with Apache reverse proxying

UPDATED: 9/27/13  (The solution below does not include audio support; for audio over stunnel please see this post over at warped.org)

Recently I was assisting a local business in setting up their Foscam IP cameras and making them remotely accessible for monitoring purposes from anywhere in the world. The particular models they had installed are from the FI8910W line. These cameras are pretty cool, and at ~$100 retail they are a pretty good deal in my opinion. The cameras can be accessed via a browser over HTTP and also support a rudimentary HTTP/CGI API. However, one of the biggest issues with these cameras security-wise is the lack of SSL support. The embedded webserver on these things only supports HTTP and basic auth in the clear, which, outside of your local network, is not a good thing if your requirement is to be able to view/manage them remotely over the internet.

One solution for this is to simply front all access to your cameras with an SSL-secured reverse proxy. We did this using Apache's mod_proxy. I'm not going to go into every detail of how to do this below; the point is to give the reader a starting point. You can look up the details on all these Apache configuration specifics elsewhere on the web; there are tons of examples out there.

The example below is for securing access to two Foscam IP cameras on your local network, living on an example 192.168.1.0 subnet. It assumes the local network is fronted by a router that supports port forwarding, which most consumer/business routers do. The end objective here is that when you access https://myproxy.host.com:10001 you will be accessing CAM1, and when you access https://myproxy.host.com:10002 you will be accessing CAM2.

Secondarily, you can also set it up so that you can hit CAM1 at https://myproxy.host.com:10000/cam1/ and CAM2 at https://myproxy.host.com:10000/cam2/

  1. CAM1 IP = 192.168.1.100 listening on port 80
  2. CAM2 IP = 192.168.1.101 listening on port 80
  3. Reverse Proxy Server = 192.168.1.50 listening on ports 10000, 10001, 10002
  4. Router IP address: 192.168.1.1  configured with port forwarding as follows: Port 10000 -> 192.168.1.50:10000, 10001 -> 192.168.1.50:10001 and 10002 -> 192.168.1.50:10002

OVERVIEW

  • First off, you need to set up a computer/server running Apache. The Apache webserver is available for almost every operating system known to man, from Linux to Windows to OS X. This server's IP address is 192.168.1.50; ensure that name-based virtual host support is enabled, as well as mod_ssl.
  • Next, ensure that Apache is listening on all the necessary ports (the three mentioned above). You will want Apache to listen on a separate, unique port for each IP camera it is proxying access to, or at least one unique port if you are proxying the cameras off of a sub-path. For this example we are assigning port 10000 -> [CAM1 & CAM2 via sub-dir proxies], port 10001 -> CAM1 only, and port 10002 -> CAM2 only. Within your Apache configuration you will want to ensure that you have statements like the following configured:
NameVirtualHost *:10000
NameVirtualHost *:10001
NameVirtualHost *:10002
Listen 10000
Listen 10001
Listen 10002
  • Now that Apache is configured to listen on the necessary ports, we need to configure the actual virtual hosts and the reverse proxying directives within each host; see the example below:
###############################
# Reverse proxy config for BOTH
# CAMs (1 & 2) via sub-paths
# @ 192.168.1.100
###############################
<VirtualHost 192.168.1.50:10000>
 ProxyRequests Off
 ProxyPreserveHost On
 ProxyVia On
 <Proxy *>
 Order deny,allow
 Allow from all
 </Proxy>

 # CAM1 (note trailing / is important)
 ProxyPass /cam1/ http://192.168.1.100:80/
 ProxyPassReverse /cam1/ http://192.168.1.100:80/

 # CAM2 (note trailing / is important)
 ProxyPass /cam2/ http://192.168.1.101:80/
 ProxyPassReverse /cam2/ http://192.168.1.101:80/

 CustomLog /path/to/apachelogs/access_cam1.log combined
 ErrorLog /path/to/apachelogs/error_cam1.log
 ServerName cam3

 SSLEngine On
 SSLCertificateFile /path/to/sslcert/mysslcert.crt
 SSLCertificateKeyFile /path/to/sslkey/sslkey.key

 <FilesMatch "\.(cgi|shtml|phtml|php)$">
 SSLOptions +StdEnvVars
 </FilesMatch>
 <Directory /usr/lib/cgi-bin>
 SSLOptions +StdEnvVars
 </Directory>

 BrowserMatch "MSIE [2-6]" \
 nokeepalive ssl-unclean-shutdown \
 downgrade-1.0 force-response-1.0
 # MSIE 7 and newer should be able to use keepalive
 BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>

###############################
# Reverse proxy config for CAM1
# @ 192.168.1.100
###############################
<VirtualHost 192.168.1.50:10001>
 ProxyRequests Off
 ProxyPreserveHost On
 ProxyVia On
 <Proxy *>
 Order deny,allow
 Allow from all
 </Proxy>
 ProxyPass / http://192.168.1.100:80/
 ProxyPassReverse / http://192.168.1.100:80/
 CustomLog /path/to/apachelogs/access_cam1.log combined
 ErrorLog /path/to/apachelogs/error_cam1.log
 ServerName cam1

 SSLEngine On
 SSLCertificateFile /path/to/sslcert/mysslcert.crt
 SSLCertificateKeyFile /path/to/sslkey/sslkey.key

 <FilesMatch "\.(cgi|shtml|phtml|php)$">
 SSLOptions +StdEnvVars
 </FilesMatch>
 <Directory /usr/lib/cgi-bin>
 SSLOptions +StdEnvVars
 </Directory>

 BrowserMatch "MSIE [2-6]" \
 nokeepalive ssl-unclean-shutdown \
 downgrade-1.0 force-response-1.0
 # MSIE 7 and newer should be able to use keepalive
 BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>

###############################
# Reverse proxy config for CAM2
# @ 192.168.1.101
###############################
<VirtualHost 192.168.1.50:10002>
 ProxyRequests Off
 ProxyPreserveHost On
 ProxyVia On
 <Proxy *>
 Order deny,allow
 Allow from all
 </Proxy>
 ProxyPass / http://192.168.1.101:80/
 ProxyPassReverse / http://192.168.1.101:80/
 CustomLog /path/to/apachelogs/access_cam2.log combined
 ErrorLog /path/to/apachelogs/error_cam2.log
 ServerName cam2

 SSLEngine On
 SSLCertificateFile /path/to/sslcert/mysslcert.crt
 SSLCertificateKeyFile /path/to/sslkey/sslkey.key

 <FilesMatch "\.(cgi|shtml|phtml|php)$">
 SSLOptions +StdEnvVars
 </FilesMatch>
 <Directory /usr/lib/cgi-bin>
 SSLOptions +StdEnvVars
 </Directory>

 BrowserMatch "MSIE [2-6]" \
 nokeepalive ssl-unclean-shutdown \
 downgrade-1.0 force-response-1.0
 # MSIE 7 and newer should be able to use keepalive
 BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>
  • Ok, so before you start up Apache, you need to generate your own self-signed SSL certificate/key. See those lines above in the configuration for "SSLCertificateFile" and "SSLCertificateKeyFile"? You will need to generate your own SSL private key and certificate request, and then self-sign it. The resulting files are what you point to in the configuration on your proxy server. You can read here for an example of how to generate the necessary files
  • Next, ensure the router that sits in front of your proxy server at 192.168.1.50 has port forwarding enabled and forwards traffic going to ports 10000, 10001, and 10002 to your proxy server.
  • Start up Apache, work out the kinks, and you should be ready to go. If you are outside of your normal network, you will need to find your router's public WAN IP address and go to https://my.external.router.ip:10001 and https://my.external.router.ip:10002 to access CAM1 and CAM2, respectively, over SSL from anywhere in the world. Or, secondarily, you can go to https://my.external.router.ip:10000/cam1/ and https://my.external.router.ip:10000/cam2/ to hit the cameras. Please note that traffic from your browser to your proxy server is encrypted with SSL; however, the SSL encryption terminates at the proxy server. Network traffic from your proxy server to CAM1 and CAM2 is unencrypted, but it only travels over your local network. This article assumes you trust whoever is on your local network not to be sniffing packets.
  • You will also want to ensure that your proxy server has a firewall on it, permits IP forwarding, limits access to only the necessary ports, and is configured securely. You can handle that on your own; it is outside the scope of this article.
  • Hopefully this helps someone out there who wants to securely access their IP cameras over the internet. Note that what is described above should work with any IP camera on the market that only supports HTTP; however, the general procedure was only tested against the Foscam FI8910W.
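For the self-signed certificate step above, the key and certificate files referenced by SSLCertificateKeyFile/SSLCertificateFile can be generated with a single openssl command along these lines (the file names and the CN below are placeholders; substitute your own values):

```shell
# Generate a 2048-bit private key and a self-signed cert valid for ~1 year.
# "sslkey.key", "mysslcert.crt", and the CN are placeholder values.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout sslkey.key -out mysslcert.crt \
  -subj "/CN=myproxy.host.com"
```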

SOFTWARE FOR VIEWING YOUR PROXIED CAMERAS

I've received many questions regarding which apps out there support talking to Foscams behind an SSL-secured proxy, and unfortunately the few I've used all fall short in one way or another. Proxying HTTP/HTTPS-based resources on a network (via ports, sub-paths, or other methods) is a technology that has been around for a long time, and from a client app's perspective it need not even know the proxy is there. Secondarily, the Foscam camera APIs will work just fine regardless of how they are proxied (from the root URL or off of a sub-path in the proxy). Regardless, here are some apps I've used, with some notes:

  • iOS: FoscamPro: Cool app; works great when you are on your internal network, but fails miserably if you try to use it from outside your network when your cameras are behind an SSL-secured proxy as described above. Why? The FoscamPro application simply DOES NOT support SSL. (FoscamPro devs: PLEASE IMPLEMENT THIS!) The only way to use FoscamPro in the setup above is if you have a VPN server running behind your router; you connect to your home VPN, which lets you appear "internal" to your local network when you are outside of it, and then access your cameras directly, bypassing the proxy. The VPN itself is what encrypts all of your communications.
  • iOS: Live Cams Pro: Cool app; works very similarly to FoscamPro but supports other manufacturers and more devices, generic URL streams, etc. They DO support SSL, which works with the proxied setup described above. However, they DO NOT support specifying a relative path off of the base IP that you are connecting to a Foscam camera with. This effectively eliminates your ability to proxy your cameras via sub-dirs (i.e. https://my.net/cam1/), which is CRITICAL if you have a lot of cams but your router limits the number of port forwards you can have! (Live Cams Pro devs: PLEASE IMPLEMENT THIS!)
  • Android: tinyCam Monitor PRO: Cool app. I must admit I am pretty sure this supports HTTPS, as I was testing with it earlier this summer for the port->cam based config. I have not tested with the sub-dir path setup. If someone can shoot me an update on this, I'll appreciate it. (I've switched to all iOS.)

JBoss and BouncyCastleProvider – SecurityException: “cannot authenticate the provider”

Are you having problems using the BouncyCastleProvider from your app on JBoss 5.x+ (i.e. seeing errors like those listed below)? If so, and you don’t want to spend hours working around this issue in JBoss, just follow this guy’s instructions and get back to business: http://www.randombugs.com/java/javalangsecurityexception-jce-authenticate-provider-bc.html

Some background on this JBoss issue is at https://issues.jboss.org/browse/AS7-308

Errors that can be fixed by doing the above:

java.lang.SecurityException: JCE cannot authenticate the provider BC: org.jasypt.exceptions.EncryptionInitializationException: java.lang.SecurityException: JCE cannot authenticate the provider BC

OR

Caused by: java.util.jar.JarException: Cannot parse vfs: /path/to/your/bouncycastle.jar
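For context, “JCE cannot authenticate the provider BC” generally means the JVM could not verify the signed provider jar. A quick JDK-only sanity check for which JCE providers are actually registered in your runtime (“BC” is the name BouncyCastle conventionally registers under) might look like the sketch below:

```java
import java.security.Provider;
import java.security.Security;

public class ProviderCheck {
    public static void main(String[] args) {
        // List every JCE provider the runtime currently knows about
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersion());
        }
        // null here means BouncyCastle was never installed,
        // or its registration failed (e.g. the authentication error above)
        Provider bc = Security.getProvider("BC");
        System.out.println(bc == null ? "BC not registered" : "BC registered");
    }
}
```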



Astyanax -> Cassandra PoolTimeoutException during Authentication failure?

Recently I was working on implementing a custom IAuthenticator and IAuthority for Cassandra 1.1.1, because out of the box there is essentially no security. For those of you familiar with Cassandra, its distribution used to include simple property-file-based implementations of IAuthenticator and IAuthority that you could reference in your cassandra.yaml file; however, they were moved from the main distribution to the examples/ section due to weak-security concerns. They are a decent starting point to reference when building your own implementations, but they are not recommended for real production use, hence why I started to implement my own.

While doing this, I came across a situation using the Netflix Astyanax client API to talk to Cassandra, which was running with the custom IAuthenticator and IAuthority that I made. When testing connection initialization while (intentionally) specifying invalid credentials, instead of seeing some sort of AuthenticationException dumped to my client’s Astyanax log file, I was getting “PoolTimeoutException“s instead…. which was odd. I scratched my head on this for a while, cloned Astyanax from GitHub, and began digging into the source. I suspected that the Thrift AuthenticationException might be suppressed somewhere…. well, after reading the source, I realized it wasn’t being suppressed per se, but rather sent to Astyanax’s ConnectionPoolMonitor, which is something you can configure programmatically when defining your client code’s AstyanaxContext object, which manages all connectivity to Cassandra. Out of the box Astyanax ships with a few ConnectionPoolMonitor implementations: one is the CountingConnectionPoolMonitor (does no logging, just collects stats) and the second is the Slf4jConnectionPoolMonitorImpl (logs to SLF4J). Depending on which one you specify in your context’s configuration, you may or may not see AuthenticationException information in your client’s logs/console.

In my case, I was specifying the CountingConnectionPoolMonitor, which received the AuthenticationException but did nothing with it other than increment a counter, effectively hiding it from me. The pool ran out of connections (it could not create any), and the code waiting on a connection simply threw a PoolTimeoutException, adding to my confusion.

To correct this, since I was using Log4J, I just created a custom ConnectionPoolMonitor that logged everything to Log4J instead (see Astyanax’s SLF4J monitor implementation as an example of how to create one for Log4J). See below for how to specify the monitor. Creating your own ConnectionPoolMonitor implementation is easy and pretty self-explanatory.
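To make the “swallowed exception” behavior concrete, here is a minimal, self-contained sketch of the pattern. The names here are hypothetical, not Astyanax’s actual API: it only illustrates how routing failures to a counting-style monitor callback, instead of rethrowing them, hides the real cause from the caller.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical monitor interface (not Astyanax's real one): the pool reports
// failures to a pluggable monitor rather than propagating them to the caller.
interface PoolMonitor {
    void onConnectionCreateFailed(Exception cause);
}

// Counting-style monitor: increments a counter and silently drops the cause,
// which is how an authentication failure can vanish from your logs.
class CountingMonitor implements PoolMonitor {
    final AtomicLong failures = new AtomicLong();
    public void onConnectionCreateFailed(Exception cause) {
        failures.incrementAndGet(); // cause is discarded
    }
}

// Logging-style monitor: surfaces the real exception.
class LoggingMonitor implements PoolMonitor {
    public void onConnectionCreateFailed(Exception cause) {
        System.err.println("Connection create failed: " + cause);
    }
}

public class MonitorDemo {
    public static void main(String[] args) {
        CountingMonitor monitor = new CountingMonitor();
        try {
            throw new SecurityException("invalid credentials"); // simulated auth failure
        } catch (Exception e) {
            // Swallowed: the caller never sees this, only a pool timeout later
            monitor.onConnectionCreateFailed(e);
        }
        System.out.println("failures=" + monitor.failures.get()); // prints "failures=1"
    }
}
```

Swapping in a logging-style monitor at the single configuration point is what finally exposed the underlying AuthenticationExceptions in my case.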

Below is an example of setting up an AstyanaxContext and specifying the ConnectionPoolMonitor that should be used. Once I used the correct monitor for my needs, I was able to see the true source of the PoolTimeoutExceptions (i.e. the AuthenticationExceptions), because now my monitor was logging them. (NOTE: the example below is just a test context, not something for a robust setup.)


AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster(clusterName)
    .forKeyspace(keyspaceName)
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        // no ring discovery; connect only to the seed hosts below
        .setDiscoveryType(NodeDiscoveryType.NONE))
    .withConnectionPoolConfiguration(
        new ConnectionPoolConfigurationImpl(clusterName + "-" + keyspaceName + "_CONN_POOL")
            .setPort(defaultConnectionPoolHostPort)
            .setInitConnsPerHost(1)
            .setMaxConnsPerHost(2)
            .setSeeds(connectionPoolSeedHosts)
            .setAuthenticationCredentials(
                new SimpleAuthenticationCredentials(new String(principal), new String(credentials))))
    // custom Log4J-backed monitor, so AuthenticationExceptions actually get logged
    .withConnectionPoolMonitor(new Log4jConnPoolMonitor())
    .buildKeyspace(ThriftFamilyFactory.getInstance());

context.start();
