Fix: HDP "Unauthorized connection for super-user: oozie from IP 127.0.0.1"

Recently I have been playing with Hortonworks HDP 2.2. I was starting to configure some Oozie workflows, and when submitting the job, the first step's Hive script failed with the following error and stack trace.


JA002: Unauthorized connection for super-user: oozie from IP 127.0.0.1

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): Unauthorized connection for super-user: oozie from IP 127.0.0.1
at org.apache.hadoop.ipc.Client.call(Client.java:1468)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy39.getDelegationToken(Unknown Source)
at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getDelegationToken(ApplicationClientProtocolPBClientImpl.java:306)
... 30 more

To fix this, SSH into your HDP VM, edit /etc/hadoop/conf/core-site.xml, and change the following property to add "localhost". Save, then restart the relevant services or just reboot your HDP VM instances.


<property>
<name>hadoop.proxyuser.oozie.hosts</name>
<value>sandbox.hortonworks.com,127.0.0.1,localhost</value>
</property>

Execute PowerShell commands via Node.js, REST, AngularJS

Building on my last post on stateful-process-command-proxy, this post will cover how you can leverage that Node.js module to expose the capabilities of PowerShell cmdlets over a REST API presented through an AngularJS interface. Why would one want to do this, you ask? Well, I covered this in my last post, but I will briefly explain it here.

(Note: what is described below could just as easily be built for Bash processes, since the underlying module supports them as well.)

The use case came out of the need to automate certain calls to manage various objects within Microsoft o365's environment. Unfortunately Microsoft's Graph API does not expose all of the functionality that is available via its suite of PowerShell cmdlets for o365 services. Secondly, doing these operations via PowerShell requires a pre-established remote PSSession to o365… and establishing (and tearing down) a new remote PSSession whenever you need to invoke a cmdlet against a remote resource (a remote server, or an o365 endpoint) is expensive. Lastly, who wants to sit there and manually run these commands when they could be automated and invoked on demand via other means… such as a web service? Hence stateful-process-command-proxy came to be… it provides the building-block bridge between Node.js and a pool of pre-established PowerShell consoles. Once you have Node.js talking to stateful-process-command-proxy, you can build whatever you want on top of that in Node.js to mediate the calls.

Layer one

The first higher-level NPM module that builds on stateful-process-command-proxy is powershell-command-executor.

What this adds on top of stateful-process-command-proxy is probably best described by the conceptual diagram in the project's README.

So the main thing to understand is that the module provides the PSCommandService class, which takes a registry of pre-defined "named" commands and their respective permissible arguments. The registry is nothing more than an object full of configuration and is easy to define. You can see an example here in the project, which defines a bunch of named "commands" and their arguments usable against o365 to manipulate users, groups, etc. PSCommandService is intended to serve as a decoupling point between the caller and the StatefulProcessCommandProxy… in other words, a place where you can restrict and limit the types of commands, and the (sanitized) arguments, that can ever reach the PowerShell processes pooled within StatefulProcessCommandProxy.

It is PSCommandService's responsibility to look up the named command you want to execute, sanitize the arguments, and generate a literal PowerShell command string that is then sent to the StatefulProcessCommandProxy to be executed. StatefulProcessCommandProxy, once it receives the command, is responsible for checking that the command passes its command whitelist and blacklist before executing it. The sample o365Utils.js config file provides a set of pre-canned (usable) examples of init/destroy commands, auto-invalidation commands, and whitelist/blacklist configs that you can use when constructing the StatefulProcessCommandProxy that the PSCommandService will use internally.
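To make that a bit more concrete, here is a rough sketch of what a named-command registry entry and its invocation could look like. This is illustrative only: the registry field names, the constructor argument order, and the execute() method shown here are my approximations, so consult the project's README and o365Utils.js for the actual schema.

// Illustrative sketch only -- the real registry schema is defined by powershell-command-executor
// (see o365Utils.js in the project); field and method names below are approximations.
var PSCommandService = require('powershell-command-executor'); // export shape may differ; see the README

var commandRegistry = {
  getUser: {
    // template for the literal PowerShell command that will be generated
    command: 'Get-MsolUser',
    // only arguments declared here (after sanitization) can ever reach PowerShell
    arguments: {
      UserPrincipalName: { required: true }
    }
  }
};

// 'proxy' is a StatefulProcessCommandProxy instance (see the construction sketch in the
// stateful-process-command-proxy post below); PSCommandService sits in front of it.
var psCommandService = new PSCommandService(proxy, commandRegistry);

psCommandService.execute('getUser', { UserPrincipalName: 'someone@example.com' })
  .then(function (result) { console.log(result.stdout); })
  .catch(function (err) { console.error(err); });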

Layer two

The next logical step is to expose some way for callers to invoke these pre-canned "commands". One way to do this is via a web service.

WARNING: doing such a thing without much thought can expose you to serious security risks. You need to really think about how you will secure access to this layer, the types of commands you expose, your argument sanitization, and your filtering of permissible commands via whitelists and blacklists for injection protection. Another precaution you may want to take is running it on localhost only, for experimental purposes. Read OWASP's article on command injection.

OK, with that obvious warning out of the way, here is the next little example project, which provides this kind of layer on top of powershell-command-executor: powershell-command-executor-ui

This project is a small Node.js ExpressJS app that provides a simple set of REST services that allow the caller to:

  • get all available named commands in the PSCommandService registry
  • get an individual command configuration from the registry
  • generate a command from a set of arguments
  • execute the command via a set of arguments and get the result
  • obtain the “status” of the underlying StatefulProcessCommandProxy and its history of commands

Given the above set of services, one can easily build a user interface which dynamically lets the user invoke any command in the registry and see the results… and this is exactly what this project does via an AngularJS interface (albeit a bit crude). See the project repository for screenshots.
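To give a feel for how thin this REST layer can be, here is a minimal ExpressJS sketch of one such endpoint. The route path and the psCommandService call are my own illustration rather than the project's actual routes, and per the warning above, anything like this should stay on localhost behind proper authentication.

// Minimal illustrative ExpressJS endpoint that executes a named command from the registry.
// The route and the psCommandService API shown are assumptions; see powershell-command-executor-ui
// for the real implementation.
var express = require('express');
var app = express();

// assume psCommandService was wired up as in the earlier registry sketch
app.get('/commands/:name/execute', function (req, res) {
  // req.query carries the named arguments; PSCommandService only permits registry-defined
  // commands/arguments, and the underlying proxy applies its whitelist/blacklist checks
  psCommandService.execute(req.params.name, req.query)
    .then(function (result) { res.json(result); })
    .catch(function (err) { res.status(500).json({ error: String(err) }); });
});

app.listen(3000, 'localhost'); // experimental use only -- do not expose this publicly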

Hopefully this will be useful to others out there, enjoy.

Encrypting and storing PowerShell credentials


Recently I had the need to store some credentials for a PowerShell script (i.e. credentials that I ultimately needed in a PSCredential object). Another requirement was that these credentials be portable and "user" independent, meaning that they could not be encrypted using DPAPI (the Windows Data Protection API), as that binds the "secret" used for the encryption to the currently logged-in user (which reduces the portability and usability of the encrypted credentials). The way to avoid this is to specify the secret key parameters in the ConvertTo-SecureString and ConvertFrom-SecureString commands, which forces them to use AES (strength determined by your key size).

I ended up coding a few PowerShell scripts that assist with the creation of a JSON AES-256 encrypted credentials file plus a secret key, as well as functions you can include in other PowerShell scripts to load these credentials into usable formats such as PSCredentials, SecureStrings, etc.

Please see: https://github.com/bitsofinfo/powershell-credential-encryption-tools

NOTE! The most important thing about using the output from this tool is properly locking down the secret key (i.e. with file permissions)!

The format of the resulting file looks something like this:

{ "username" : "AESEncryptedValue", "password": "AESEncryptedValue" }

Executing stateful shell commands with Node.js – PowerShell, Bash, etc.

Hoping this will be useful for others out there, I've posted some code that can serve as a lower-level component/building block in a Node.js application that needs to mediate interaction with command-line programs on the back end (i.e. Bash shells, PowerShell, etc.).

The project is on GitHub at stateful-process-command-proxy and is also available as an NPM module.

This is a Node.js module for executing OS commands against a pool of stateful child processes, such as Bash or PowerShell, via their stdout and stderr streams. It is important to note that, despite the use case described below for this project's origination, this module can be used for proxying long-lived Bash processes (or any shell, really) in addition to PowerShell. It works and has been tested on *nix, OS X, and Windows hosts running the latest version of Node.

This project originated out of the need to execute various PowerShell commands (at fairly high volume and frequency) against services within Office365/Azure, bridged via a custom Node.js-implemented REST API; this was due to the lack of certain features in the REST Graph API for Azure/o365 that are available only via PowerShell (which can maintain persistent connections over remote sessions).

If you have done any work with PowerShell and o365, then you know that there is considerable overhead in both establishing a remote session and importing and downloading the various needed cmdlets. This is an expensive operation, and there is a lot of value in being able to keep this remote session open for longer periods of time rather than repeating the entire process for every single command that needs to be executed and then tearing everything down.

Simply doing a child_process.exec per command to launch an external process, run the command, and then kill the process is not really an option under such scenarios, as it is expensive and very singular in nature; no state can be maintained if need be. We also tried using edge.js with PowerShell, and this simply would not work with o365 Exchange commands and heavy session cmdlet imports (the entire Node.js process would crash). Using this module gives you full unfettered access to the externally connected child_process, with no restrictions other than what uid/gid (permissions) the spawned process is running under (which you really have to consider from a security standpoint!).

The diagram in the project's README should conceptually give you an idea of what this module does: process pooling, custom init/destroy commands, process auto-invalidation configuration, command history retention, etc. See here for full details: https://github.com/bitsofinfo/stateful-process-command-proxy
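To give a feel for the API, here is a pared-down sketch of constructing a proxy and executing a command against it. The option names are approximations of what the project's README documents (the full set of pooling, invalidation, and whitelist/blacklist options is omitted), so treat this as a sketch and consult the README for the authoritative configuration.

// Pared-down sketch of stateful-process-command-proxy usage; option names are approximate --
// see the project's README for the full/authoritative configuration.
var StatefulProcessCommandProxy = require('stateful-process-command-proxy');

var proxy = new StatefulProcessCommandProxy({
  name: 'bash-pool',             // pool name
  min: 1,                        // minimum pooled child processes
  max: 2,                        // maximum pooled child processes
  processCommand: '/bin/bash',   // could just as well be powershell.exe on Windows
  processArgs: ['-s'],
  initCommands: ["echo 'session initialized'"],       // e.g. establish a remote PSSession here
  preDestroyCommands: ["echo 'session tearing down'"] // e.g. Remove-PSSession here
});

// each call is routed to one of the pooled, already-initialized processes
proxy.executeCommand('echo hello from a pooled shell')
  .then(function (result) {
    console.log(result.stdout); // the result also carries the command issued and stderr
  })
  .catch(function (err) { console.error(err); });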

Obviously this module can expose you to some insecure situations depending on how you use it… you are providing a gateway to an external process via Node on your host OS (likely a shell, in most use cases). Here are some tips; ultimately it's your responsibility to secure your system.

  • Ensure that the node process is running as a user with very limited rights
  • Make use of the uid/gid configuration appropriately to further limit the processes
  • Never expose calls to this module directly; instead, write a wrapper layer around StatefulProcessCommandProxy that protects, analyzes, and sanitizes any external input that can materialize in a command statement.
  • All commands you pass to execute should be sanitized to protect from injection attacks
  • Make use of the whitelist and blacklist command features of this module
  • WRAP this service via additional code that sanitizes all arguments to protect from command injection

Hopefully this will help others out there who have a similar need: https://github.com/bitsofinfo/stateful-process-command-proxy

Configuring PowerShell for Azure AD and o365 Exchange management

Ahhh, love it! So you need to configure a Windows box to be able to utilize DOS, sorry, PowerShell, to remotely manage your Azure AD / o365 / Exchange Online services via "cmdlets". You do some searching online and come across a ton of seemingly loosely connected TechNet articles, forum questions, etc.

Well, I hope to summarize it for you in this single blog post, and I'll try to keep it short without a lot of "why this needs to be done" explanations. You can read up on that on your own with the reference links below.

#1: The first thing we need to do is set up a separate user account that we will use when connecting via PowerShell to the remote services we want to manage with it:

  1. Using an account with administrative privileges, log in to your Azure account/tenant at https://manage.windowsazure.com
  2. Once logged in, click on "Active Directory" and select the instance you want to add the new user account to
  3. Click on "Add user" and fill out the details. Be sure to select "Global Administrator" as the role (or a lesser one, if need be, depending on what you will be doing with PowerShell)
  4. Click Create; a temporary password will be generated and emailed to the new user plus the secondary email address you filled out
  5. Log out of the Azure management portal
  6. Log in again at https://manage.windowsazure.com, however this time log in as the new user you just created, with the temporary password. Once logged in, reset the password to a better one and click Next.
  7. You should now be logged in as the new user you just created and on the main Azure management dashboard screen
  8. Find the link for managing “Exchange” and click on it
  9. You will now be redirected to the o365 Exchange admin center
  10. Click on "Permissions"; you will now see a bunch of groups/roles. The one we care about is Organization Management.
  11. Highlight the "Organization Management" role/group and ensure that the user you are logged in as (the new user you just created) is a member of this group, directly or indirectly. You need to be a member of this group in order to get the "Remote Shell" permission that lets you download the Exchange cmdlets and manage Exchange remotely via PowerShell. (See here for info on this group and the Remote Shell permission)

#2: Now that our special admin user is created with all the needed permissions, we can get our PowerShell environment ready:

  1. Get on the Windows box that you intend to run the PowerShell commands from
  2. Download and install the "Microsoft Online Services Sign-In Assistant for IT Professionals" (it's OK even if you are not a "professional")
  3. It's 2014… you need to reboot after the last step…
  4. Download and install the “Azure AD Module for Windows PowerShell 64 bit”

#3: OK, let's verify basic Azure AD PowerShell cmdlet capabilities

  1. On your desktop, RIGHT-click "Windows Azure Active Directory Module for Windows PowerShell" and choose "Run as Administrator"
  2. In PowerShell, run this command: "Set-ExecutionPolicy Unrestricted"
  3. In PowerShell, run this command: "Connect-MsolService"; a dialog will prompt you for your credentials (use the creds that you set up above)
  4. In PowerShell, run this command: "Get-MsolUser". Did you get data back? Great, you are good to go for basic connectivity

#4: Finally… let's verify o365 Exchange PowerShell cmdlet capabilities

  1. In the same PowerShell as you started above…
  2. Type: “$UserCredential = Get-Credential”… again enter your user credentials
  3. Type:
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    
  4. Type: “Import-PSSession $Session”
  5. At this point you should see some activity at the top of your PowerShell window as 300+ Exchange online cmdlets are downloaded to your system for use
  6. Quickly verify the Exchange Online Remote Shell permission with: “Get-User YOUR_UPN | Format-List RemotePowerShellEnabled”
  7. You should get back “RemotePowerShellEnabled: true”

DONE, proceed to the next quagmire…

 

REFERENCE LINKS:

Managing Azure AD Using PowerShell:
http://technet.microsoft.com/en-us/library/jj151815.aspx

o365 Exchange online: Remote Shell Permission and Organization Management
http://technet.microsoft.com/en-us/library/dd638114(v=exchg.150).aspx

Connect to Exchange Online using Remote PowerShell:
http://technet.microsoft.com/en-us/library/jj984289(v=exchg.150).aspx

Series: Using remote PowerShell to manage o365
http://o365info.com/using-remote-powershell-to-manage_212/

Copying lots of files into S3 (and within S3) using s3-bucket-loader

Recently a project I've been working on had the following requirements for a file-set containing roughly a million files, varying in individual size from one byte to over a gigabyte, with the file-set as a whole totaling between 500GB and one terabyte:

  1. Store this file-set on Amazon S3
  2. Make this file-set accessible to applications via the filesystem; i.e. access should look no different than any other directory structure locally on the Linux filesystem
  3. Changes on nodeA in regionA’s data-center should be available/reflected on nodeN in regionN’s data-center
  4. The available window to import this large file-set into S3 would be under 36 hours (due to the upgrade window for the calling application)
  5. The S3 bucket will need to be backed up at a minimum every 24 hours (to another bucket in S3)
  6. The application that will use all of the above generally treats the files as immutable and they are only progressively added and not modified.

If you are having to deal with a similar problem, perhaps this post will help you out. Let's go through each item.

Make this file-set accessible to applications via the filesystem; i.e. access should look no different than any other directory structure locally on the Linux filesystem. Changes on node-A in region-A's data-center should be available/reflected on node-N in region-N's data-center.

So here you are going to need an abstraction that can present the S3 bucket as a local directory structure; conceptually similar to an NFS mount. Any changes made to the directory structure should be reflected on all other nodes that mount the same set of files in S3. Now, there are several different kinds of S3 filesystem abstractions, and they generally fall into one of three categories (block based, 1-to-1, and native); the type has big implications for whether the filesystem can be distributed or not. This webpage (albeit outdated) gives a good overview that explains the different types. After researching a few of these we settled on attempting to use YAS3FS (Yet Another S3 FileSystem). YAS3FS, written in Python, presents an S3 bucket via a local FUSE mount; what YAS3FS adds above other S3 filesystems is that it can be "aware" of events that occur on other YAS3FS nodes that mount the same bucket, and can be notified of changes via SNS/SQS messages. YAS3FS keeps a local cache on disk, so that it gives the benefits (up to a point) of local access and can act like a CDN for the files on S3. Note that FUSE-based filesystems are slow and limited to a block size (if the caller will utilize it) of 131072. YAS3FS itself works pretty well; however we are *still* in the evaluation process as we work through many issues that are creeping up in our beta environment, the big ones being Unicode support and several concurrency issues that keep coming up. Hopefully these will be solvable within the existing code's architecture…

 

The available window to import this large file-set into S3 would be under 36 hours

OK, no problem, let's just use s3cmd. Well… we tried that and it failed miserably. After several crashes and failed attempts we gave up. s3cmd is single-threaded and extremely slow to do anything against a large file-set, much less load it completely into S3. I also looked at other tools (like s4cmd, which is multi-threaded), but again, even these other "multi-threaded" tools eventually bogged down and/or became non-responsive against this large file-set.

Next we tried mounting the S3 bucket via YAS3FS and executing rsyncs from the source files to the target S3 mount… again this "worked" without any crashing, but it was single-threaded and took forever. We also tried running several rsyncs in parallel, but managing this, and verifying the result (that all files were actually in S3 correctly with the correct metadata), was a challenge. The particular challenge is that YAS3FS returns to rsync/cp immediately after the file is written to the local YAS3FS cache, and then proceeds to push to S3 asynchronously in the background (which makes it more difficult to check for failures).

Given the above issues, it was time to get crazy with this, so I came up with s3-bucket-loader. You can read all about how it works here, but the short of it is that s3-bucket-loader uses massive parallelism, orchestrating many EC2 worker nodes, to load (and validate!) millions of files into an S3 bucket (via an S3 filesystem abstraction) much more quickly than other tools. Rather than sitting around for days waiting for the copy process to complete with other tools, s3-bucket-loader can do it in a matter of hours (and validate the results). Please check it out; the GitHub project explains it in more detail.

The S3 bucket will need to be backed up at a minimum every 24 hours (to another bucket in S3)

Again, this presents another challenge; at least with copying from bucket to bucket you don't actually have to move the bytes around yourself, and can rely on S3's key-copy functionality. So again here we looked at s3cmd and s4cmd to do the job, and again they were slow, crashed, or bogged down due to the large file-set. I don't know how these tools manage their internal work queues, but it seems they grow so large that the tools just crash or slow down to the point where they become inefficient. At this point you have two options for very fast bucket copying:

  1. s3-bucket-loader: I ended up adding key-copy support to the program, and it distributes the key-copy operations across EC2 worker nodes. It copies the entire file-set in under an hour, and in under 20 minutes with more EC2 nodes.
  2. s3s3mirror: After coding #1 above, I came across s3s3mirror. This program is a multi-threaded, well-coded powerhouse of a program that just "worked" the first time I used it. After contributing SSL, AWS-encryption, and storage-class support for it, doing a full bucket copy of over 600GB and ~800k S3 objects took only 45 minutes (running with 100 threads)! It has good status logging/output and I highly recommend it.
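Both options above ultimately lean on S3's server-side key-copy primitive, where S3 duplicates an object between buckets without the bytes ever passing through the client. For reference, a single key copy via the AWS SDK for Node.js looks roughly like the sketch below; the bucket and key names are placeholders.

// Single server-side key copy: S3 copies the object internally; the client only issues the request.
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.copyObject({
  Bucket: 'my-backup-bucket',                        // destination bucket (placeholder)
  CopySource: 'my-source-bucket/path/to/object.bin', // "sourceBucket/sourceKey"
  Key: 'path/to/object.bin'                          // destination key (placeholder)
}, function (err, data) {
  if (err) { return console.error('copy failed:', err); }
  console.log('copied, ETag:', data.CopyObjectResult.ETag);
});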

Overall, for the bucket-to-bucket "copying" requirement, I really like s3s3mirror; nice tool.