Resize EC2 Volume (without Resizing Partition)

Problem

You would like to resize a volume attached to an EC2 instance.

Solution

Do the following:

  • Create a snapshot of your volume (instructions)
  • Stop your instance
  • Go to EBS / Volumes and select Actions / Modify Volume

  • Enter the new size for your volume (note that you can only make the volume larger, never smaller) and click on Modify

  • Wait for the modification to complete (this might take a while, around 30 minutes)
  • Start your instance
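
If you prefer the command line, the console steps above correspond roughly to the following AWS CLI calls. This is just a sketch; the volume ID, instance ID, and target size are placeholders:

# Placeholder IDs and size -- adjust before running.
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-resize backup"
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100
# Poll until the modification reaches the 'optimizing' or 'completed' state.
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
aws ec2 start-instances --instance-ids i-0123456789abcdef0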

Now, if everything went well, you should have more space available on the disk for the virtual machine. To confirm this, run:

df -h

You should see the new size of the volume as the size of your main partition.
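
To double-check that the partition actually grew along with the disk, lsblk is also handy:

# Lists each disk and its partitions with their sizes; the root partition
# should now (roughly) match the enlarged volume.
lsblk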

Notes

  • If the size of your partition does not match the size of the volume, you probably need to resize your partition (instructions; a minimal sketch also follows these notes).
  • Resizing the partition is a very painful process that I think is best avoided. It helps if the EC2 instance attached to the volume is stopped when the resize is performed, so make sure that this is the case before you resize.
  • If you forgot to stop your instance and need to do a partition resize, there is a little workaround: wait six hours, then resize your volume again (this time while the instance is stopped). With luck, your partition is then adjusted to the correct size.
  • You might be able to start up your instance even while the new volume is still optimizing. I haven’t tested this, but my guess is that it would work.
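
If you do end up having to grow the partition by hand, here is a minimal sketch, assuming a modern Linux AMI with an ext4 root filesystem on the first partition of /dev/xvda (growpart ships with the cloud-utils / cloud-guest-utils package):

# Assumes ext4 on partition 1 of /dev/xvda -- adapt device names to your setup.
sudo growpart /dev/xvda 1    # grow the partition to fill the volume
sudo resize2fs /dev/xvda1    # grow the ext4 filesystem to fill the partition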


Set up MySQL Replication with Amazon RDS

Problem

You have an existing server that runs a MySQL database (either on EC2 or not) and you would like to replicate it to an Amazon RDS MySQL instance.

After you follow the instructions from Amazon, your slave reports the IO status:

Slave_IO_State: Connecting to master

… and the replication does not work.

Solution

AWS provides very good documentation on how to set up the replication: Replication with a MySQL or MariaDB Instance Running External to Amazon RDS.

Follow the steps there but be aware of the following pitfall:

In step 6, `create a user that will be used for replication`, it says you should create a user for the domain ‘mydomain.com’. That will in all likelihood not work. Instead, find out the IP address of the Amazon RDS instance that will be the replication slave.

One way to do this is as follows:

  • Create the ‘repl_user’ for the domain ‘%’, e.g.:
CREATE USER 'repl_user'@'%' IDENTIFIED BY '<password>';
  • Also grant the replication privileges to this user (the password was already set by CREATE USER above)
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'%';
  • Open port 3306 on your server for any IP address. The replication should now start working.
  • Go to your master and run the following command:
SHOW PROCESSLIST;
  • Find the process with the user repl_user and get the IP address from there. This is the IP address for your Amazon RDS slave server.
  • Delete the user 'repl_user'@'%' on the master
  • Create the user 'repl_user'@'[IP address of slave]' on the master
  • Modify the firewall of your master to only accept connections on port 3306 from the IP address of the slave.
  • Restart replication on the RDS slave with
CALL mysql.rds_stop_replication;
CALL mysql.rds_start_replication;
  • And check the status with
SHOW SLAVE STATUS\G

The slave IO status should now be “Waiting for master to send event”.
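
For reference, here is a sketch of the user fix-up on the master using the mysql command-line client; 203.0.113.10 is a placeholder for the slave IP address you found via SHOW PROCESSLIST:

# Placeholder IP and password -- replace with the values you discovered.
mysql -u root -p <<'SQL'
DROP USER 'repl_user'@'%';
CREATE USER 'repl_user'@'203.0.113.10' IDENTIFIED BY '<password>';
GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'repl_user'@'203.0.113.10';
SQL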


Route 53 Cannot Find CloudFront Distribution

Problem

You have created a CloudFront distribution with a custom domain name (such as yourdomain.com).

Now if you try to link this distribution to your domain using Route 53, you get the following error message:

`No AWS resource exists with the Alias Target that you specified.`

Solution

Try the following to solve this problem:

  • Make sure that the CNAME you specified for the CloudFront distribution matches your domain name exactly. For instance, if your domain name is www.yourdomain.com, make sure that this is also the CNAME entered on the distribution.
  • When creating the record set in Route 53, make sure to select the record type `A – IPv4 Address` and not CNAME.

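For reference, the alias record can also be created with the AWS CLI. This is a sketch with a placeholder hosted zone ID and distribution domain; Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for all CloudFront distributions:

# Placeholder zone ID and CloudFront domain -- adjust to your setup.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.yourdomain.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'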


Solving ‘One or more of your origins do not exist’ for CloudFront

Problem

You are trying to create a CloudFront distribution using Amazon’s API.

You get the error:

“One or more of your origins do not exist”

Solution

In my case, I had provided different values for these two properties:

DistributionConfig.DefaultCacheBehavior.TargetOriginId

and

DistributionConfig.Origins.Items[0].Id

Just make sure that the Id of one of your origins matches the TargetOriginId of the DefaultCacheBehavior, and the error should disappear.
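
As an illustration, here is a trimmed, hypothetical fragment of a distribution config (most required fields omitted) in which the two values line up:

{
  "DefaultCacheBehavior": {
    "TargetOriginId": "my-s3-origin"
  },
  "Origins": {
    "Quantity": 1,
    "Items": [
      { "Id": "my-s3-origin" }
    ]
  }
}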


Automatically Make Snapshots for EC2

A quick Google search reveals that there are quite a few different approaches for automatically creating snapshots for EC2 (such as here, here and here).

All of these are rather difficult to do.

Thankfully, after some more searching around I found a great way to schedule regular snapshots using AWS CloudWatch.

CloudWatch supports a built-in target for ‘Create a snapshot of an EBS volume’.

For details on how to set this up, see the excellent step-by-step instructions in the CloudWatch documentation.
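
If you just need something quick instead, a cron job that calls the AWS CLI works as well; here is a minimal sketch with a placeholder volume ID:

# Placeholder volume ID -- run e.g. nightly from cron.
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Automated snapshot $(date +%F)"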

AWS Lambda: Cross-account pass role is not allowed.

Today I came across the following exception while working with the AWS SDK for Amazon Lambda:

com.amazonaws.AmazonServiceException: Cross-account pass role is not allowed. (Service: AWSLambda; Status Code: 403; Error Code: AccessDeniedException; Request ID: xxx)

At first I was a bit puzzled about where this exception might come from, but once I found out what the problem was, it seemed pretty obvious:

I tried to upload a lambda function to one AWS account while specifying an execution role that belonged to another AWS account.

So that could easily be fixed by providing a role belonging to the correct account!

Alternatively, this error might also occur when you deploy a lambda function whose template.yaml file references a role from another account (as mentioned by Steven T in the comments below).

UPDATE

As mentioned in the comments by rjhintz, if you need to use the role from another user, you can do so by modifying the role’s policy as follows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::123456789012:user/user1",
          "arn:aws:iam::123456789012:user/user2"
        ],
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
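
To apply such a trust policy from the command line, something along these lines should work (the role name and file name are placeholders):

# Assumes the policy above is saved as trust-policy.json.
aws iam update-assume-role-policy \
  --role-name my-lambda-execution-role \
  --policy-document file://trust-policy.json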

What is Amazon Flourish (for AWS)

According to a recent article on the New Stack Blog, the Amazon Serverless Team (responsible, for instance, for Amazon Lambda) is about to release a new open source product called ‘Flourish’.

Currently, there are very few details available on this product. These are some points I could find:

  • It will be a platform to manage components of serverless applications.
  • This includes versioning lambda functions and packaging lambda functions with other components such as database dependencies.
  • It will be open source (under the Apache license)

As more details become available, I will update this post.

Bulk Change ACL for Amazon S3

Using bucket policies, it is easy to set ACL settings for all new objects that are uploaded to Amazon S3.

However, I wanted to remove ‘public’ read rights for a whole bunch of objects at the same time, and such policies do not apply to objects that are already stored on S3.

I found an easy way to change the ACL settings for many objects at the same time. To bulk-change ACLs, do the following:

  • Download the free tool CloudBerry Explorer for Amazon S3
  • Install it
  • In the AWS management console, go to Security Credentials
  • Create a new user ‘s3-super’. Save the access key and secret key.
  • Attach the policy ‘AmazonS3FullAccess’ to the user

  • Start CloudBerry Explorer and connect to S3 with the access key and secret key of the s3-super user
  • In this tool, navigate to the bucket with the objects you would like to change
  • In the left-hand column, select one or more objects for which you want to change the ACL settings
  • Click on the ACL Settings button

  • In the dialog that pops up, change the settings to what you like and click OK.

The ACL settings for your objects should now be changed.
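
Alternatively, if you prefer to script the bulk change, here is a sketch with the AWS CLI (placeholder bucket name; note this issues one request per object):

# Placeholder bucket name -- resets every object's ACL to private.
aws s3api list-objects-v2 --bucket my-bucket --query 'Contents[].Key' --output text \
  | tr '\t' '\n' \
  | while read -r key; do
      aws s3api put-object-acl --bucket my-bucket --key "$key" --acl private
    done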