Lambda Function to Resize EBS Volumes of EMR Nodes

I have to start by saying that you should not use EMR as a persistent Hadoop cluster. The power of EMR lies in its elasticity: you launch an EMR cluster, process the data, write the results to S3 buckets, and terminate the cluster. However, we see a lot of AWS customers use EMR as a persistent cluster. So I was not surprised when a customer told me that they needed to resize the EBS volumes automatically on new core nodes of their EMR cluster. The core nodes were configured with 200 GB disks, but now they wanted 400 GB disks. It’s not possible to change the instance type or EBS volume configuration of core nodes on a running cluster, so a custom solution was needed. I explained to the customer how to do it with some sample Python code, but in the end they gave up on this method (thank God).

I wanted to see if it could be done anyway. So, for fun and curiosity, I wrote a Lambda function in Java. It should be scheduled to run every 5 or 10 minutes. On every run, it checks whether there’s an ongoing resize operation. If the resize is done, it connects to the node and runs the “growpart” and “xfs_growfs” commands to grow the partition and filesystem. If there’s no resize operation in progress, it checks all volumes of a specific cluster and starts a resize operation on any volume smaller than the target size.
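The remote step is the interesting part: after EC2 finishes the volume modification, the node’s partition and filesystem still have to be grown. A small helper can build that remote command; the device, partition number, and mount point below are assumptions (they depend on how the EMR core node’s disks are laid out), and in the real function an SSH library would execute the result on the node:

```java
// Builds the command the function would run over SSH on a core node after a
// volume resize completes. Device/partition/mount point are illustrative.
public class GrowCommand {
    public static String forDevice(String device, int partition, String mountPoint) {
        // growpart extends the partition table entry; xfs_growfs then
        // extends the XFS filesystem to fill the grown partition.
        return String.format("sudo growpart %s %d && sudo xfs_growfs %s",
                device, partition, mountPoint);
    }
}
```

For example, `GrowCommand.forDevice("/dev/xvdb", 1, "/mnt")` yields the one-liner to run on the node.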

Here’s the main class that will be used by the Lambda function:
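The full class is not reproduced in this post, but its control flow can be sketched in plain Java. Everything below is illustrative: the real class implements the Lambda `RequestHandler` interface and uses the AWS SDK (EC2, EMR, DynamoDB) plus an SSH library where the stub methods are, and the environment variable names are my assumptions, not necessarily the project’s actual keys:

```java
import java.util.List;
import java.util.Map;

// Sketch of the handler's control flow. AWS SDK and SSH calls are reduced
// to overridable stubs so the per-run logic is visible.
public class Resizer {

    protected final Map<String, String> env; // Lambda environment variables

    public Resizer(Map<String, String> env) { this.env = env; }

    public String handleRequest() {
        // Variable names are illustrative, not the project's actual keys.
        int targetSize = Integer.parseInt(env.get("TARGET_SIZE"));
        String clusterId = env.get("CLUSTER_ID");

        // 1. If an EBS modification is still running, do nothing this round.
        if (anyModificationInProgress(clusterId))
            return "waiting for EBS modification to finish";

        // 2. If a volume was resized but not yet repartitioned, SSH in and
        //    run growpart + xfs_growfs on its node.
        List<String> pending = volumesPendingRepartition(clusterId);
        if (!pending.isEmpty()) {
            growPartitionAndFilesystem(pending.get(0));
            return "repartitioned " + pending.get(0);
        }

        // 3. Otherwise, start resizing the first undersized volume found.
        for (Map.Entry<String, Integer> v : volumeSizes(clusterId).entrySet()) {
            if (v.getValue() < targetSize) {
                startResize(v.getKey(), targetSize); // ec2:ModifyVolume
                recordInDynamoDb(v.getKey());        // remember what we touched
                return "resizing " + v.getKey();
            }
        }
        return "nothing to do";
    }

    // --- stubs standing in for EC2 / EMR / DynamoDB / SSH calls ---
    protected boolean anyModificationInProgress(String clusterId) { return false; }
    protected List<String> volumesPendingRepartition(String clusterId) { return List.of(); }
    protected Map<String, Integer> volumeSizes(String clusterId) { return Map.of(); }
    protected void growPartitionAndFilesystem(String volumeId) { }
    protected void startResize(String volumeId, int sizeGb) { }
    protected void recordInDynamoDb(String volumeId) { }
}
```

Keeping the AWS calls behind overridable methods like this also makes the decision flow easy to unit-test without touching any AWS service.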

As you can see, it also uses DynamoDB to keep track of modified volumes. I created a table called “resizedvolumes” with its primary partition key defined as “clusterid (Number)”. The function should be scheduled to run every 10 minutes (or 5 minutes). On every run, it checks whether any volume is resizing or requires repartitioning. If no volume requires repartitioning, it checks whether any volume is undersized; if one is, it starts resizing that volume and stores its information in DynamoDB.
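To make the tracking concrete, the item stored per resize might look like the map below. Only the “resizedvolumes” table name and the “clusterid (Number)” partition key come from the setup above; the other attribute names are illustrative (the real code would build this through the DynamoDB Document API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the tracking item written to the "resizedvolumes" table.
// "clusterid" is the Number partition key; other attributes are assumptions.
public class TrackingItem {
    public static Map<String, Object> forVolume(long clusterId, String volumeId,
                                                int targetSizeGb) {
        Map<String, Object> item = new LinkedHashMap<>();
        item.put("clusterid", clusterId);     // partition key (Number)
        item.put("volumeid", volumeId);       // which EBS volume was modified
        item.put("targetsize", targetSizeGb); // size requested via ModifyVolume
        item.put("state", "resizing");        // later flipped once growpart has run
        return item;
    }
}
```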

The name of the Lambda handler function is “com.gokhanatil.volumeresizer.Resizer”. After you build the JAR, you upload it to an S3 bucket and create the Lambda function. The Lambda function expects you to define some environment variables. To connect to the EMR nodes, it needs access to your private key: upload the key to an S3 bucket and pass the bucket name and file name as parameters. You also need to give the name of the DynamoDB table. The other required variables are the cluster ID, the AWS region, and the target volume size. Their names should make clear what they are used for.
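Since a missing variable would otherwise only surface as a confusing runtime error mid-run, validating the configuration up front is worthwhile. The key names below are assumptions based on the description above, not necessarily the project’s actual names:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Fail-fast check of the function's environment variables.
// The names here are illustrative; check the project source for the real keys.
public class Config {
    static final List<String> REQUIRED = List.of(
            "S3_BUCKET", "S3_KEY_FILE",   // location of the SSH private key
            "DYNAMODB_TABLE",             // tracking table, e.g. "resizedvolumes"
            "CLUSTER_ID", "AWS_REGION", "TARGET_SIZE");

    // Returns the missing/empty keys so the handler can report them clearly.
    public static List<String> missing(Map<String, String> env) {
        return REQUIRED.stream()
                .filter(k -> env.get(k) == null || env.get(k).isEmpty())
                .collect(Collectors.toList());
    }
}
```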

When creating the Lambda function, you need to specify a VPC, a subnet, and a security group, so you can configure the security groups of the EMR nodes to accept connections from your Lambda function. You can select the VPC, subnet, and security group used by the EMR master node.

As I said, you need to schedule the function to run periodically. You can use a “CloudWatch Events” trigger and make it run every 5 or 10 minutes (whatever you prefer).

Here’s the AWS policy for the IAM role that I assigned to my Lambda function:
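The original policy is not reproduced in this post, but a policy along these lines should cover what the function does: describing and modifying EBS volumes, listing EMR instances, reading/writing the DynamoDB table, fetching the private key from S3, writing CloudWatch logs, and managing the network interfaces a VPC-attached Lambda needs. This is my reconstruction, not the author’s exact policy; in production you should scope the `Resource` entries down to the specific table, bucket, and volumes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeVolumes",
        "ec2:DescribeVolumesModifications",
        "ec2:ModifyVolume",
        "ec2:DescribeInstances",
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "elasticmapreduce:DescribeCluster",
        "elasticmapreduce:ListInstances",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "s3:GetObject",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```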

I hope it helps. If you have any questions about the sample application, let me know and I’ll explain it in more detail.

AWS Big Data Specialist. Oracle Certified Professional (OCP) for EBS R12, Oracle 10g and 11g. Co-author of the book "Expert Oracle Enterprise Manager 12c" published by Apress. Awarded Oracle ACE (in 2011) and Oracle ACE Director (in 2016) for continuous contributions to the Oracle user community. Founding member and vice president of the Turkish Oracle User Group (TROUG). Presented at various international conferences, including Oracle Open World.
