Monthly Archives: January 2016

top 5 new features from Amazon Web Services

Amazon Web Services (AWS) keeps extending its lead over other cloud providers, largely because of the steady pace at which it ships new features. These features are customer-driven innovations: they deliver value, save money, and make the "web of services" easier to use.

Here are five of the new features AWS has updated.

  1. AWS WorkSpaces enhancements

Amazon WorkSpaces has three new features, all geared to making the service more useful.

  • Audio-In – You can now make and receive calls from your WorkSpace using common communication tools such as Skype, Lync, and WebEx.
  • Saved Registration Codes – The client application can now store several registration codes.
  • High DPI Device Support – The in-session experience now scales automatically to match your local DPI settings, supporting the growing adoption of high-DPI (Full HD, QHD+ and Ultra HD) displays.

 

  2. AWS CodePipeline now supports Lambda

Software release pipelines modeled in AWS CodePipeline can now invoke AWS Lambda functions. You can add Lambda-backed actions to your pipeline's stages, which lets you customize your software release process with your own code.

CodePipeline is a continuous delivery service that builds, tests, and deploys your code every time the code changes, based on the release process models you define. With Lambda, you can run code without provisioning or managing servers: you simply upload your code and Lambda takes care of everything needed to run it.
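As a sketch of what such a pipeline action can look like: the event shape and the put_job_success_result / put_job_failure_result calls below are the real CodePipeline job API, but everything inside run_custom_step is a placeholder for your own release logic.

```python
# Sketch of a Lambda function invoked as a CodePipeline action.

def get_job_id(event):
    # CodePipeline invokes the function with a "CodePipeline.job" record;
    # the job id is needed to report the result back to the pipeline.
    return event["CodePipeline.job"]["id"]

def run_custom_step():
    # Placeholder for your own logic (run tests, call an external
    # system, flip a feature flag, ...).
    return True

def lambda_handler(event, context):
    import boto3  # available in the AWS Lambda runtime
    codepipeline = boto3.client("codepipeline")
    job_id = get_job_id(event)
    if run_custom_step():
        codepipeline.put_job_success_result(jobId=job_id)
    else:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={"type": "JobFailed",
                            "message": "custom step failed"},
        )
```

Until the function reports success or failure, the pipeline stage stays in progress, so always report a result on every code path.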

  3. AWS CloudFormation adds Override for Rollbacks

You can now instruct AWS CloudFormation to continue rolling back an update to your stack even after the rollback has failed (the ContinueUpdateRollback action, or aws cloudformation continue-update-rollback from the CLI). Previously this required a call to customer support.

Factors that can lead to a failed rollback include insufficient permissions, resources that have not stabilized, limit errors, or changes made to a stack resource outside of CloudFormation.

  4. New AWS IoT features

The AWS IoT Device Gateway now supports MQTT over WebSockets. Mobile and web applications that communicate over WebSockets can scale to millions of simultaneous users. WebSockets can be used together with Amazon Cognito to authenticate end users to your devices.

AWS has also added support for custom keepalive intervals: apps and devices that hold open connections to AWS IoT can now specify how long each connection stays open when no messages are received.

Lastly, the AWS IoT console has been enhanced to make getting started even quicker. You can now use the console to publish and subscribe to MQTT messages without a physical device or an MQTT client, and to configure logging of your AWS IoT activity to CloudWatch Logs.

  5. New AWS Web Application Firewall functionality

You can now configure AWS WAF to allow, block, or count requests based on the content of HTTP request bodies. This part of a request contains any additional data that a client sends to your web server, such as form data.

It is also possible to set size constraints on specific parts of a request, so that AWS WAF allows, blocks, or counts web requests based on the length of elements such as the URI, the query string, a header, or the request body.

What makes AWS lead amongst its competitors?

In 2015, Gartner's Magic Quadrant for Cloud Infrastructure as a Service placed Amazon Web Services in the "Leaders" quadrant, rating AWS highest in both completeness of vision and ability to execute.

The secret is their focus on maintaining that position: a fast rate of innovation, a growing customer and partner ecosystem, and the goal of operating efficiently at massive scale.

They have worked closely with large enterprises, from Siemens to Nike and Condé Nast to Intuit, helping them transform their businesses.

Amazon Operating Income

In the first half of 2015, Amazon Web Services recorded a 19% operating margin. This was high compared to Amazon's domestic and international retail margins of 4.5% and -0.6% respectively.

With these kinds of margins, AWS only needs to grow to $5.83 billion in half-year revenue to reach an annual run rate of $11.7 billion. At that pace of improvement, from a financial standpoint, retail could start to look like the side business and AWS the essential one.

AWS is a very significant business for Amazon. It has proved to be very lucrative and with the current pace of innovation and improvement, Amazon will continue to pose a big challenge to its competitors.

AWS is now at a $10 billion run rate

Short update from Amazon’s Q4

Amazon’s cloud division AWS continues to grow, impressing analysts since Amazon first started breaking out results last spring.

AWS did $2.4 billion in revenue in Q4, up from $2.1 billion in Q3.

2015 proved to be a big year for AWS in general as it rolled out:

  • 722 new services and features over the course of the year, a 40 percent increase from 2014
  • expansion to 32 Availability Zones across 12 regions
  • plans for five more regions, with 11 additional Availability Zones scheduled

chart

source: http://www.zdnet.com/article/amazon-q4-2015-earnings-revenue-cloud-prime/ 

All AWS-related quotes from the Q4 press release on Amazon's investor relations page:

  • Amazon Web Services (AWS) announced the launch of its Asia Pacific (Seoul) Region in Korea and its plans to open a new region in Canada. The AWS Cloud is now available from 32 Availability Zones across 12 geographic regions worldwide, with another five AWS Regions (and 11 Availability Zones) in Canada, China, India, Ohio, and the U.K. expected to be available in the coming year.
  • AWS announced the general availability of Amazon WorkMail, a secure, managed business email and calendaring service with support for existing desktop and mobile email clients.
  • AWS announced the general availability of AWS IoT, a managed cloud platform that lets billions of connected devices — such as mobile phones, cars, factory floors, aircraft engines, sensor grids, and more — easily and securely interact with cloud applications and other devices. AWS IoT can support trillions of messages, and can process, route, and keep track of those messages to AWS endpoints and other devices reliably and securely, even when the devices aren’t connected.
  • AWS announced AWS Certificate Manager (ACM), a new service that enables customers to easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. Certificates, which typically cost between $45 and $499, are provided to AWS customers free of charge through ACM and are verified by Amazon’s certificate authority, Amazon Trust Services.
  • AWS launched EC2 Scheduled Reserved Instances, allowing customers to reserve capacity for their applications that run on a part-time, recurring basis with a daily, weekly, or monthly schedule over the course of a one-year term.
  • AWS announced 722 significant new services and features in 2015, a 40% increase over 2014.

the history of AWS CodeDeploy

Must read!

The background story about Apollo aka AWS CodeDeploy:

“The Story of Apollo – Amazon’s Deployment Engine”,  written by Amazon CTO Werner Vogels on his blog: http://www.allthingsdistributed.com/2014/11/apollo-amazon-deployment-engine.html 

Deploying software to a single host is easy. You can SSH into a machine, run a script, get the result, and you’re done. The Amazon production environment, however, is more complex than that. Amazon web applications and web services run across large fleets of hosts spanning multiple data centers. The applications cannot afford any downtime, planned or otherwise. An automated deployment system needs to carefully sequence a software update across a fleet while it is actively receiving traffic. The system also requires the built-in logic to correctly respond to the many potential failure cases.

CodeDeploy allows you to plug-in your existing application setup logic, and then configure the desired deployment strategy across your fleets of EC2 instances. CodeDeploy will take care of orchestrating the fleet rollout, monitoring the status, and giving you a clear dashboard to control and track all of your deployments. It simplifies and standardizes your software release process so that developers can focus on what they do best –building new features for their customers. 
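The "plug in your existing application setup logic" part happens through an AppSpec file in your application bundle, which maps files to destinations and hooks your scripts into the deployment lifecycle. A minimal sketch (script names and paths are placeholders):

```yaml
version: 0.0
os: linux
files:
  - source: /site
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/stop_apache.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_apache.sh
      timeout: 60
```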

Never worked with CodeDeploy before?

Start with this 5-minute video. It shows a sample CodeDeploy deployment (flat HTML files hosted on S3, deployed to Apache web servers on EC2).

 

 

AWS block storage performance compared

(This post only covers EBS, not S3, which is object-based storage.)

There are three EBS storage types on Amazon's AWS platform for EC2 virtual machines:

  1. General Purpose SSD
  2. Provisioned IOPS
  3. Magnetic disk

Magnetic volumes

These provide the lowest cost per GB of all EBS volume types. Magnetic volumes are backed by magnetic drives and are ideal for workloads where data is accessed infrequently, and scenarios where the lowest storage cost is important. Magnetic volumes provide 100 IOPS on average, but can burst to hundreds of IOPS.

Head and platters detail of a hard disk drive Seagate Medalist ST33232A

General Purpose SSD

General Purpose (SSD) volumes are the default EBS volume type for Amazon EC2 instances. General Purpose (SSD) volumes are backed by Solid-State Drives (SSDs) and are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes. General Purpose (SSD) volumes are designed to offer single digit millisecond latencies, deliver a consistent baseline performance of 3 IOPS/GB to a maximum of 10,000 IOPS, and provide up to 160 MBps of throughput per volume. General Purpose SSD volumes smaller than 1 TB can also burst up to 3,000 IOPS. I/O is included in the price, so you pay only for the capacity.
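Those baseline numbers are easy to compute yourself; a quick sketch of the published formula (3 IOPS per GB, with the documented floor of 100 IOPS for small volumes and the 10,000 IOPS cap):

```python
# Baseline IOPS for a General Purpose (SSD) volume: 3 IOPS per GB,
# floored at 100 IOPS and capped at 10,000 IOPS.

def gp2_baseline_iops(size_gb):
    return min(max(size_gb * 3, 100), 10000)

print(gp2_baseline_iops(100))   # 300
print(gp2_baseline_iops(5000))  # 10000 (the cap)
```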

Samsung-SSD-SM825-PCB-Top

Provisioned IOPS

Provisioned IOPS volumes – backed by Solid-State Drives (SSDs) – are suitable for applications with I/O-intensive workloads such as databases.

Provisioned IOPS volumes are designed to offer single digit millisecond latencies, deliver a consistent baseline performance of up to 30 IOPS/GB to a maximum burst capacity of 20,000 IOPS, and provide up to 320 MBps of throughput per volume. Additionally, you can stripe multiple volumes together to achieve up to 48,000 IOPS or 800MBps when attached to larger EC2 instances.

To maximize the benefit of Provisioned IOPS volumes,  EBS-optimized EC2 instances are recommended. With EBS-optimized instances, Provisioned IOPS volumes can achieve single-digit millisecond latencies and are designed to deliver the provisioned performance 99.9% of the time.

Compared to some HDDs and SSDs

  • a 6 TB SATA HDD can deliver about 100 IOPS = 0.016 IOPS/GB
  • a 600 GB SAS HDD typically offers 160 IOPS = 0.26 IOPS/GB
  • a 100 GB SATA HDD would offer around 100 IOPS = 1 IOPS/GB (hypothetical disk, used for reference)
  • a 512 GB SATA SSD can deliver 84,000 IOPS = 164 IOPS/GB
  • a 512 GB 12 Gb/s SAS SSD can deliver 120,000 IOPS = 234 IOPS/GB
  • a 400 GB NVMe PCIe SSD can deliver 290,000 IOPS = 725 IOPS/GB
  • a 256 GB 3D XPoint SSD can deliver 461,000 IOPS = 1,800 IOPS/GB

StorageReview-Intel-Optane

Customers often use capacity as a metric when comparing storage solutions, and gamers tend to focus on and benchmark throughput (MB/s). Throughput, however, is hardly relevant for servers and storage systems, because random I/O is typically the bottleneck there. The most important metric is the number of random read or write operations per second, which is why storage and virtualisation specialists tend to talk about IOPS.

(Of course, not all I/Os are created equal, but that's another topic.)

An even better metric for comparing storage solutions or storage media is the number of IOPS per GB, so we calculated that metric for every disk type above.
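The IOPS-per-GB figures are simple divisions; this sketch recomputes a few of them (drive figures copied from the list above are vendor ballpark numbers, not measurements, and the results differ slightly from the rounded values in the list):

```python
# IOPS per GB = random IOPS / capacity in GB, for a few of the
# drives listed above.

drives = {
    "600GB SAS HDD":  (160, 600),
    "512GB SAS SSD":  (120_000, 512),
    "400GB NVMe SSD": (290_000, 400),
}

for name, (iops, gb) in drives.items():
    print(f"{name}: {iops / gb:.3f} IOPS/GB")
```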

How not to: set up Amazon WorkMail & WorkDocs

Tried to set up WorkMail & WorkDocs after viewing this 30-minute video:

I didn't want to use an existing Active Directory server, so I went for Simple AD:

Simple AD is a Microsoft Active Directory–compatible directory from AWS Directory Service powered by Samba 4 (developed with Microsoft’s assistance).

Simple AD supports commonly used Active Directory features such as user accounts, group memberships, domain joining of Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux and Microsoft Windows, Kerberos-based single sign-on (SSO), and group policies. This makes it even easier to manage Amazon EC2 instances running Linux and Windows, and to deploy Windows applications on AWS.

WorkMail is really easy to set up if you can change the DNS settings of your domain yourself. Since WorkMail isn't available in all AWS regions, you should note the following:

A Simple AD only works within a single region, so make sure you set up WorkMail & WorkDocs in the same region in order to use the same public URL and the same user accounts.

DNS setup for WorkMail

CloudSceptic.nl: an example of a DNS setup for WorkMail
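As a sketch, the records involved typically look like the fragment below. The domain, region, and record targets here are purely illustrative; use the exact values the WorkMail console generates for your domain.

```
; Illustrative WorkMail DNS records for example.com in eu-west-1
example.com.               MX    10 inbound-smtp.eu-west-1.amazonaws.com.
autodiscover.example.com.  CNAME autodiscover.mail.eu-west-1.awsapps.com.
_amazonses.example.com.    TXT   "domain-verification-token-from-the-console"
```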

HowTo: migrate your DNS hosting to Route 53

Today we migrated the DNS hosting of the domain http://nimbusarchitect.us to Amazon Route 53. It's easy; let's have a look at the process.

For most AWS services you have to choose a region. You don't for Route 53: it's a global service.

Screen Shot 2016-01-04 at 16.49.54

AWS allows you to transfer a domain to Route 53. This is the easy way: you don’t have to recreate your records if you use this wizard.

But if you like, you can keep your current registrar. We wanted to keep using TransIP because it is cheaper as a registrar, and it's practical to have one place to administer all our domain names.

Use the following method in case you want to keep using your current registrar:
1. create your zone at Route 53
2. create your records / or import a zone file
3. change your name servers at your registrar (in this example Transip.nl)
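For step 2, the "import a zone file" option accepts standard BIND-style records. A minimal illustrative file (the IP address and record targets are placeholders) could look like:

```
$TTL 300
@     IN  A      203.0.113.10
www   IN  CNAME  nimbusarchitect.us.
@     IN  MX     10 mail.nimbusarchitect.us.
```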

DNS zone before the change:

Screen Shot 2016-01-04 at 16.43.37

DNS zone after the change to Route 53:

Screen Shot 2016-01-04 at 16.43.21

Projected costs: $0.50 a month…

Introduction movie Amazon Route 53

An 8-minute intro video on Route 53:

Amazon Route 53 has a simple web-services interface that lets you get started in minutes. Your DNS records are organized into “hosted zones” that you configure with Route 53’s API. Route 53 provides a simple set of APIs that make it easy to create and manage DNS records for your domains. You can call these directly; all this functionality can also be accessed via the AWS Management Console.

The sheer size of AWS put in perspective

Amazon Web Services did about $7 billion of revenue in 2015. Sounds like a lot, but I can only comprehend big numbers like that in a comparison. So let’s try to put those numbers in perspective:

As I am familiar with technology and other toy companies, let us compare AWS's $7 billion with some other companies like VMware, NetApp, Avaya and Toys 'R' Us:

Screen Shot 2016-01-03 at 09.41.54

revenue per year, in units of 100 million USD

This $7 billion is only a small part of Amazon, of course. Mother ship Amazon's total revenue is $100 billion, comparable with Microsoft's $90 billion:

Screen Shot 2016-01-02 at 22.44.54

Microsoft in blue & Amazon, over the last 2 years

To put those numbers in perspective:

The revenues of Amazon and Microsoft are comparable with the GDP of countries like Ecuador, Slovakia and Morocco (with 16, 5 and 33 million inhabitants respectively).

Gartner estimated recently that Amazon Web Services offers 10 times as much computing capacity as the next 14 players in the market, combined.

(yes 10 x all the other players in the Magic Quadrant including Microsoft and Google)

aws gartner mq

Due to its pace of growth, AWS is on track to be a $50 billion business by 2020. That's about the size of Cisco or Coca-Cola.

Screen Shot 2016-01-03 at 10.34.44

Servers

The number of servers is unknown. What we do know:

  • in 2014 AWS had an estimated 1.4 million servers (which implies a profit of about 3,000 USD per server in 2014)
  • Gartner estimates AWS has more than 2 million servers
  • AWS servers are spread over 28 Availability Zones
  • a typical AWS datacenter has over 80,000 servers

So let us conclude:

AWS is utterly massive.

SSH login on EC2 Linux without .pem file?

By default you have to use a .pem key file to SSH into an Amazon Linux instance. This is a good idea, and safer than a password, but sometimes it's more practical to use a username and password. You still can; this is how:

ssh-add-ec2V2-620x264

First, add your downloaded .pem file to your SSH agent on Linux and Unix systems like OS X:

ssh-add /path/to/pemfile.pem

Login without .pem file? Follow these steps:

  1. Log in using your .pem file: ssh -i pemfilename.pem ubuntu@publicip (or ubuntu@instancename.region.compute.amazonaws.com)
  2. Create a new user to log in with a password: sudo useradd -s /bin/bash -m -d /home/adminbert -g root adminbert
  3. Set a strong password: sudo passwd adminbert
  4. Edit the SSH config file and change PasswordAuthentication from no to yes: sudo nano /etc/ssh/sshd_config
  5. Restart SSH: sudo service ssh restart

You can now login using a username and password.

(ssh username@publicipaddress or username@instancename.region.compute.amazonaws.com)

AWS Solutions Architect – Education options

No time to read? Recommended training is acloud.guru – use qwikLABS as backup

update: When you apply for AWS Activate you get 80 credits for qwiklab.com Self-paced labs ($80 value). Go to http://aws.amazon.com/activate/ 

There are a lot of options to educate oneself these days. This post will focus on online training options, because I just love CBTs: you can set the pace yourself, skip parts you are already familiar with, or pause the instructor when you want to Google something or try it yourself. Learning on the go is of course another big benefit.

What again?

Capture22

To be clear: I want to pursue two certifications from Amazon:

The AWS Certified Solutions Architect – Associate exam is intended for people with experience designing distributed applications and systems on the AWS platform. Exam concepts you should understand for this exam include:

  • Designing and deploying scalable, highly available, and fault tolerant systems on AWS
  • Lift and shift of an existing on-premises application to AWS
  • Ingress and egress of data to and from AWS
  • Selecting the right AWS service based on data, compute, database, or security requirements
  • Identifying proper use of AWS architectural best practices
  • Estimating AWS costs and identifying cost control mechanisms

and

The AWS Certified Solutions Architect – Professional exam validates advanced technical skills and experience in designing distributed applications and systems on the AWS platform. Example concepts you should understand for this exam include:

  • Designing and deploying dynamically scalable, highly available, fault tolerant, and reliable applications on AWS
  • Selecting right AWS services to design and deploy an application based on given requirements
  • Migrating complex, multi-tier applications on AWS
  • Designing and deploying enterprise-wide scalable operations on AWS
  • Implementing cost control strategies

qwikLABS

Amazon recommends qwikLABS, so I looked at this first. They use a somewhat odd credit system: you buy credits and then pay per module. The complete Architect Associate level course will cost you about 76 USD, which is not expensive at all for 5 hours of content. You can also buy individual parts for about 10 credits each.

There is also a lot of free content. Currently they have about 30 free introduction labs.
This is how it works:
– you apply for a lab
– you receive a lab instruction PDF
– you launch the lab (in fact, an AWS account is created for you)
– you log in to AWS and play around following the PDF instructions

+ hands-on experience, without having to use your own AWS account and credit card

+ recommended by Amazon

- not suitable for learning on the go


CBT nuggets

I love them; they were the first option that came to mind. They have a lot of content on Amazon Web Services, but for now they only cover the Associate level Solutions Architect. So I will have to look further, because I want to do both the Associate and Professional levels to become a "real nimbus architectus". CBT Nuggets is also an expensive option at at least 100 USD a month. Not recommended.

Pluralsight

Formerly known as TrainSignal; again a big name in the industry. I happen to have a corporate account, so it would be nice if they offered both the Associate and Pro AWS Solutions Architect training. Pluralsight is not expensive at all (about 30 USD a month for an individual) and they offer about 20 courses on AWS. However, they do not offer a course aimed directly at the AWS Solutions Architect certifications. Not an option.

CloudAcademy

Then I found CloudAcademy, which I had never heard of before. It looks good; they offer the courses I am looking for:

Screen Shot 2016-01-02 at 10.57.35

The courses contain course material, labs, and quiz modules; for the Associate level certification there are 11 labs and 10 video courses, each with a quiz at the end. Subscriptions start at 25 USD a month, and you can get started easily: 7 days for free without entering your credit card. Looks good to me!

After a few hours, though, the video lessons get boring. The content is there and it's technically correct, but it doesn't engage; maybe it's a lack of personality, or the intonation.

acloud.guru

I found acloud.guru via Udemy.com: 11 hours of video, 74 lessons, and 230 quiz questions for a fixed price. At the moment of writing there is a discount and the course costs only 27 euros.

Screen Shot 2016-01-09 at 09.38.39

Acloud.Guru also offers the AWS Solutions Architect professional course.
Screen Shot 2016-01-09 at 09.38.48

+ study on the go

+ good instructor, with enthusiasm & personality

+ active community

+ the course is updated all the time

Conclusion

So nimbusarchitect.us highly recommends acloud.guru

&

recommends qwikLABS if you need some extra material, or when you need guidance with the hands-on labs.

 

Hello world! (an introduction)

update: also see this post: The sheer size of AWS put in perspective

A short introduction

This website was started by Bert van der Lingen (LinkedIn profile), currently a pre-sales consultant, solution architect and trainer. He worked in "traditional IT" for 17 years in different roles: support, systems and lead engineer, sales engineer, consultant, and architect.

In October 2015 we all learned that Amazon's cloud (AWS) is 15x bigger than the number 2 player (Microsoft with Azure). Not only is the size staggering, but they are growing very fast. And contrary to the expectations of financial analysts, they are actually making a lot of profit (a 17% margin).

Screen Shot 2016-01-03 at 10.34.44

Gartner estimated recently that Amazon Web Services offers 10 times as much computing capacity as the next 14 players in the market, combined.

Screen Shot 2016-01-02 at 11.22.38

This is a big thing for the IT industry, and a shock for enterprise IT. As a long-time consultant, architect, engineer and SE in "traditional IT", I find this very interesting, if not scary.

You Will Be Assimilated

Join the force

I plan to go for the AWS Solutions Architect certifications, so I have set up this weblog to post about these subjects once in a while. Expect short how-tos, tutorials, tips, and things I write down to remember them. Expect opinionated articles. Expect a mess. If we don't count search bots as visitors, I will probably be the only regular visitor myself.

Bert van der Lingen

IT depends.