Category Archives: Uncategorized

new Snowball announcements: 80TB import & export + extra regions

Amazon AWS announced Snowball in October 2015:

A storage appliance owned and maintained by AWS, used to import up to 50TB of data into “The Cloud”. You simply order a Snowball from the AWS Management Console and wait a few days for the appliance to be delivered.

Initially only importing data into AWS was supported, but earlier this year AWS announced support for massive data exports: getting data out of the cloud.

Today AWS announced larger Snowballs, and they are available in more regions:

A new, larger-capacity (80 terabyte) Snowball appliance is now available.

As of today Snowball is available in four new Regions:

  •  AWS GovCloud (US)
  •  US West (Northern California)
  • Europe (Ireland)
  • Asia Pacific (Sydney)

(existing regions are US West (Oregon) and US East (N. Virginia))

Snowball will be available in the remaining AWS Regions in the coming year.

How the Snowball service works:

(obviously managed by an AWS SWF workflow; notifications are sent using Amazon SNS)

For more information see the product page and the announcement blog post from Jeff Barr. Another interesting read is a blog post titled “Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances”.


Amazon AWS acquires NICE

Amazon Web Services has announced that an agreement has been signed to acquire NICE. NICE is known for High Performance and Technical Computing solutions, and offers comprehensive grid and cloud solutions (already running on AWS) that are in high demand in sectors such as aerospace, industrial, energy, and utilities.


Financial terms have not been disclosed, but the NICE brand name will remain as is, and there will be no change to the team. The NICE team will work with the AWS team to develop and support the EngineFrame and Desktop Cloud Visualization products, improving existing solutions and services while developing new ones together.

AWS says that existing NICE clients need not worry about support and services: the NICE team, now backed by the AWS support team, will ensure that all existing and new clients get top-class support. The deal has not yet closed, but it is expected to be finalized in the first quarter of 2016.


We all know AWS is growing fast. Very fast. This acquisition makes one thing clear: AWS will keep growing, both by innovation from within and by acquiring other companies.

Have a look at the YouTube channel to get an impression of NICE.

AWS is easing game development & runs game servers

Today Amazon AWS launched Amazon Lumberyard and Amazon GameLift.

Amazon Lumberyard is a free game engine integrated with AWS and Twitch. Game developers get a growing set of tools to create high quality games, engage massive communities of fans, and leverage the vast compute and storage of the cloud.


Game developers can use Amazon GameLift to deploy and scale multiplayer games. AWS promises to lower the technical risk: even developers without backend experience should be able to run multiplayer games in the cloud with Amazon GameLift.


Amazon GameLift is a fully managed service for deploying, operating, and scaling multiplayer game servers on AWS without any upfront costs. A game developer should be able to deploy a game server in just minutes, eliminating hours of software development.

According to Amazon, GameLift should offer several benefits:

  • Integration with Lumberyard
  • Amazon EC2 resources that you can use to support your game sessions; for more information, see Scaling Amazon Elastic Compute Cloud (Amazon EC2) Instances
  • Reduces the engineering and operational effort to deploy, operate, and scale game servers
  • Reduces the risks involved in fluctuating player traffic
  • Allows you to pay only for the capacity you use, with no long-term commitments
  • Ability to scale server hardware based on player demand
  • Built-in metrics and logs
  • Amazon GameLift console to easily review game and player session data

Currently only the following regions can be used: (screenshot of the supported regions)

I am not a gamer, nor a game developer. But it’s likely that these tools will be leveraged by a lot of game developers, both professionals and hobbyists: GameLift allows you to start small, as a free tier is included in the offering.

All the documentation needed is present at launch time, and a lot of instruction videos are on YouTube already. Amazon even published a tutorial on how to build a multiplayer sample project to get started.


the death of Code Spaces – company deleted on AWS

Code Spaces was a company that offered web developers a GitHub-like solution based on Git or Subversion. It had been in business for seven years and had no shortage of customers. But it’s all over now: an attacker destroyed the business.


We often talk about datacenter security, data backups, and disaster recovery. We strengthen our defenses as best we can with the resources we have, and in the vast majority of cases that will be enough. Sometimes, however, it is not.

Code Spaces was built primarily on AWS, using S3 storage and EC2 servers among other services. According to the message on the Code Spaces website, an attacker obtained the credentials to the company’s AWS control panel. Code Spaces was being blackmailed; the attacker demanded money in exchange for handing back control.

The attack has effectively destroyed Code Spaces. It is comparable to someone breaking into an office building late at night, demanding ransom money, and then throwing explosives into the data center when the demands were not met. The only difference is that it’s a dreadful lot easier to penetrate a cloud-based system than to breach a corporate data center.


Code Spaces had data backups and disaster recovery solutions, but those were apparently all managed from the same AWS account. Almost all AWS resources were deleted from that account, destroying the company. The business stated that some data still remains, and it is working with customers as best it can to provide access to what is left.

This is the kind of story that should hit all of us hard, because it could indeed happen to you and me. It reinforces the idea that spreading your services over different cloud platforms is a good idea.

Perhaps you should use a couple of different suppliers if you run cloud services. Distribute your services across multiple geographic locations if at all possible, and invest a few extra dollars here and there on safeguards beyond simple server instance imaging. Even when everything else runs in the cloud, you should have off-site backups; this should be non-negotiable, though it will add a substantial cost.

The time is right for third-party cloud backup suppliers to fire up their bullhorns. This very unfortunate story ought to win them more than a few customers.

To the people behind Code Spaces, who are doubtless still reeling from this unconscionable attack: you have my sincerest condolences. May you take some slight comfort in knowing that your misfortune may help others avoid a similar fate.


AWS has a whitepaper covering security best practices that will help you define your ISMS and build a set of security policies and processes to protect your data and assets in the AWS Cloud.

top 5 new features from Amazon Web Services

Amazon Web Services (AWS) is steadily extending its lead over other cloud providers, thanks to its consistent pace of delivering new features. These new features are customer-oriented innovations: they deliver value, save money, and make the “Web of Services” easier to use.

Here are five features AWS has recently updated.

  1. AWS WorkSpaces enhancements

Amazon WorkSpaces has three new features, all geared to make the service more useful.

  • Audio-In – you can now make and receive calls from your WorkSpace using common communication tools such as Skype, Lync, and WebEx.
  • Saved Registration Codes – it is now easy to save several registration codes in one client application.
  • High DPI Device Support – the in-session display of your WorkSpace now scales automatically to match your local DPI settings, supporting the growing adoption of high-DPI (Ultra HD, QHD+, and Full HD) displays.


  2. AWS CodePipeline now supports Lambda 

Software release pipelines modeled in AWS CodePipeline can now invoke AWS Lambda functions. You can specify actions in your pipeline’s stages that execute functions defined by your code, which lets you customize your software release pipeline.

CodePipeline is a continuous delivery service that builds, tests, and deploys your code every time the code changes, based on the release process models you define. With Lambda, you can run code without provisioning or managing servers: you only upload your code, and Lambda takes care of everything needed to run it.
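As a minimal sketch of such a custom pipeline action: CodePipeline passes the job to acknowledge under the `CodePipeline.job` key of the invocation event, and the function reports back with `put_job_success_result`. The custom release step itself is left as a comment, and the code is only an illustration of the integration, not a complete action.

```python
# Sketch of a Lambda function invoked from an AWS CodePipeline stage.
# CodePipeline passes the job it expects you to acknowledge in the event.

def get_job_id(event):
    """Extract the CodePipeline job id from the invocation event."""
    return event["CodePipeline.job"]["id"]

def handler(event, context):
    job_id = get_job_id(event)
    # ... perform your custom release step here (tests, notifications, ...) ...
    # Lazy import keeps the sketch self-contained; the execution role must
    # allow codepipeline:PutJobSuccessResult for this call to succeed.
    import boto3
    boto3.client("codepipeline").put_job_success_result(jobId=job_id)
```

If the custom step fails, the function should call `put_job_failure_result` instead, so the pipeline stage is marked as failed rather than hanging.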

  3. AWS CloudFormation adds Override for Rollbacks 

It is now possible to instruct AWS CloudFormation to continue rolling back an update to your stack even after the rollback has failed. Previously this could not be done yourself, and you had to ask customer support for help.

Factors that can lead to a failed rollback include insufficient permissions, resources that have not stabilized, limit errors, or changes made to a resource in your stack outside of CloudFormation.

  4. AWS IoT added features

The AWS IoT Device Gateway now supports MQTT over WebSockets. Mobile and web applications that interact over WebSockets can scale to millions of simultaneous users. WebSockets can be used together with Amazon Cognito to authenticate end users to your devices.

AWS has also added support for custom keepalive intervals: apps and devices that hold open connections to AWS IoT can now specify how long each connection should be kept open when no messages are received.

Lastly, the AWS IoT console has been enhanced to make getting started even quicker. You can now publish and subscribe to MQTT messages from the console without a physical device or separate MQTT client, and you can still use the console to configure logging of your AWS IoT activity to CloudWatch Logs.

  5. New AWS Web Application Firewall functionality

You can now configure AWS WAF to allow, monitor, or block requests based on the content of HTTP request bodies. This part of a request contains any additional data the client sends to your web server, such as form data.

It is also possible to set size constraints on specific parts of a request, allowing AWS WAF to allow, block, or count web requests based on the length of parts such as the URI, query string, headers, or request body.
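Conceptually, a size constraint is just a predicate over one part of the request. A toy illustration of the idea (this is not the AWS WAF API; the `8192`-byte limit and the dict-based request shape are arbitrary examples):

```python
# Toy model of a WAF-style size constraint: block requests whose chosen
# part (here, the body) exceeds a configured byte limit.
def evaluate_size_constraint(request, part="body", max_bytes=8192):
    data = request.get(part, b"")
    return "BLOCK" if len(data) > max_bytes else "ALLOW"

print(evaluate_size_constraint({"body": b"x" * 10000}))  # oversized body
print(evaluate_size_constraint({"body": b"hello"}))      # small body
```

The real service evaluates such conditions at the edge, before the request reaches your web server.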

What makes AWS the leader among its competitors?

In 2015, Gartner’s Magic Quadrant for Cloud Infrastructure placed Amazon Web Services in the “Leaders” quadrant, rating AWS highest in both completeness of vision and ability to execute.

The secret behind this is maintaining its position in the cloud with a faster rate of innovation, a growing customer and partner ecosystem, and the ability to operate efficiently at massive scale.

AWS has worked closely with large enterprises, from Siemens to Nike and Condé Nast to Intuit, with the aim of helping them transform their businesses.

Amazon Operating Income

In the first half of 2015, Amazon Web Services recorded a 19% operating income margin. That is high compared to Amazon’s domestic and international retail margins of 4.5% and -0.6% respectively.

With these kinds of margins, Amazon only needs to grow its AWS division to $5.83 billion in half-year revenue to reach a run rate of $11.7 billion yearly. At the same pace of improvement, AWS could rival retail as an essential business for Amazon, from a financial standpoint.

AWS is a very significant business for Amazon. It has proved to be very lucrative and with the current pace of innovation and improvement, Amazon will continue to pose a big challenge to its competitors.

AWS is now at a $10 billion run rate

Short update from Amazon’s Q4

Amazon’s cloud division AWS continues to grow, impressing analysts since Amazon first started breaking out results last spring.

AWS did $2.4 billion in revenue in Q4, up from $2.1 billion in Q3.
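The “$10 billion run rate” in the title is simply the latest quarterly figure annualized:

```python
# Annualized run rate from quarterly revenue (figures in billions USD)
q4_revenue = 2.4
run_rate = q4_revenue * 4  # 9.6 billion, i.e. roughly a $10 billion run rate
print(f"run rate: ${run_rate:.1f} billion/year")
```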

2015 proved to be a big year for AWS in general as it rolled out:

  • 722 new services and features over the course of the year, a 40 percent increase from 2014
  • an expansion to 32 Availability Zones in 12 regions
  • plans to add 5 more regions, with 11 additional Availability Zones scheduled



All AWS-related quotes from the Q4 press release on Amazon’s investor relations page:

  • Amazon Web Services (AWS) announced the launch of its Asia Pacific (Seoul) Region in Korea and its plans to open a new region in Canada. The AWS Cloud is now available from 32 Availability Zones across 12 geographic regions worldwide, with another five AWS Regions (and 11 Availability Zones) in Canada, China, India, Ohio, and the U.K. expected to be available in the coming year.
  • AWS announced the general availability of Amazon WorkMail, a secure, managed business email and calendaring service with support for existing desktop and mobile email clients.
  • AWS announced the general availability of AWS IoT, a managed cloud platform that lets billions of connected devices — such as mobile phones, cars, factory floors, aircraft engines, sensor grids, and more — easily and securely interact with cloud applications and other devices. AWS IoT can support trillions of messages, and can process, route, and keep track of those messages to AWS endpoints and other devices reliably and securely, even when the devices aren’t connected.
  • AWS announced AWS Certificate Manager (ACM), a new service that enables customers to easily provision, manage, and deploy Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. Certificates, which typically cost between $45 and $499, are provided to AWS customers free of charge through ACM and are verified by Amazon’s certificate authority, Amazon Trust Services.
  • AWS launched EC2 Scheduled Reserved Instances, allowing customers to reserve capacity for their applications that run on a part-time, recurring basis with a daily, weekly, or monthly schedule over the course of a one-year term.
  • AWS announced 722 significant new services and features in 2015, a 40% increase over 2014.

the history of AWS CodeDeploy

Must read!

The background story about Apollo aka AWS CodeDeploy:

“The Story of Apollo – Amazon’s Deployment Engine”,  written by Amazon CTO Werner Vogels on his blog: 

Deploying software to a single host is easy. You can SSH into a machine, run a script, get the result, and you’re done. The Amazon production environment, however, is more complex than that. Amazon web applications and web services run across large fleets of hosts spanning multiple data centers. The applications cannot afford any downtime, planned or otherwise. An automated deployment system needs to carefully sequence a software update across a fleet while it is actively receiving traffic. The system also requires the built-in logic to correctly respond to the many potential failure cases.

CodeDeploy allows you to plug-in your existing application setup logic, and then configure the desired deployment strategy across your fleets of EC2 instances. CodeDeploy will take care of orchestrating the fleet rollout, monitoring the status, and giving you a clear dashboard to control and track all of your deployments. It simplifies and standardizes your software release process so that developers can focus on what they do best –building new features for their customers. 
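The “existing application setup logic” is plugged in through an appspec.yml file at the root of your application revision, which maps files to destinations and hooks your scripts into the deployment lifecycle. A minimal sketch (the paths and script names are made-up examples):

```yaml
# appspec.yml - minimal CodeDeploy application specification (illustrative)
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:
    - location: scripts/start_server.sh
      timeout: 60
```

CodeDeploy runs the hook scripts on each instance in the order defined by the lifecycle, so the same revision can be rolled out across a whole fleet.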

Never worked with CodeDeploy before?

Start with this 5 minute video. It will show a sample CodeDeploy deployment (flat html files hosted on S3 deployed to Apache web servers on EC2).



AWS block storage performance compared

(this post only covers EBS, not S3 (object-based storage))

There are three EBS storage types on Amazon’s AWS platform for EC2 virtual machines:

  1. General Purpose SSD
  2. Provisioned IOPS
  3. Magnetic disk

Magnetic volumes

These provide the lowest cost per GB of all EBS volume types. Magnetic volumes are backed by magnetic drives and are ideal for workloads where data is accessed infrequently, and scenarios where the lowest storage cost is important. Magnetic volumes provide 100 IOPS on average, but can burst to hundreds of IOPS.

Head and platters detail of a hard disk drive Seagate Medalist ST33232A

General Purpose SSD

General Purpose (SSD) volumes are the default EBS volume type for Amazon EC2 instances. General Purpose (SSD) volumes are backed by Solid-State Drives (SSDs) and are suitable for a broad range of workloads, including small to medium-sized databases, development and test environments, and boot volumes. General Purpose (SSD) volumes are designed to offer single digit millisecond latencies, deliver a consistent baseline performance of 3 IOPS/GB to a maximum of 10,000 IOPS, and provide up to 160 MBps of throughput per volume. General Purpose SSD volumes smaller than 1 TB can also burst up to 3,000 IOPS. I/O is included in the price, so you pay only for the capacity.
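These gp2 figures translate into a simple calculation. A sketch based only on the numbers quoted above (3 IOPS/GB baseline, 10,000 IOPS cap, burst to 3,000 IOPS for volumes under 1 TB):

```python
def gp2_iops(size_gb):
    """Baseline and burst IOPS for a General Purpose (SSD) volume,
    per the figures quoted above."""
    baseline = min(3 * size_gb, 10_000)          # 3 IOPS/GB, capped at 10,000
    burst = 3_000 if size_gb < 1_000 else baseline  # small volumes can burst
    return baseline, burst

print(gp2_iops(100))   # small volume: 300 IOPS baseline, bursts to 3,000
print(gp2_iops(4000))  # large volume: capped at the 10,000 IOPS maximum
```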


Provisioned IOPS

Provisioned IOPS volumes – backed by Solid-State Drives (SSDs) – are suitable for applications with I/O-intensive workloads such as databases.

Provisioned IOPS volumes are designed to offer single digit millisecond latencies, deliver a consistent baseline performance of up to 30 IOPS/GB to a maximum burst capacity of 20,000 IOPS, and provide up to 320 MBps of throughput per volume. Additionally, you can stripe multiple volumes together to achieve up to 48,000 IOPS or 800MBps when attached to larger EC2 instances.

To maximize the benefit of Provisioned IOPS volumes,  EBS-optimized EC2 instances are recommended. With EBS-optimized instances, Provisioned IOPS volumes can achieve single-digit millisecond latencies and are designed to deliver the provisioned performance 99.9% of the time.

Compared to some HDDs and SSDs

  • a 6TB SATA HDD can deliver about 100 IOPS = 0.016 IOPS/GB
  • a 600GB SAS HDD typically offers 160 IOPS = 0.26 IOPS/GB
  • a 100GB SATA HDD typically offers around 100 IOPS = 1 IOPS/GB (non-existent disk, used for reference)
  • a 512GB SATA SSD can deliver 84,000 IOPS = 164 IOPS/GB
  • a 512GB 12Gb/s SAS SSD can deliver 120,000 IOPS = 234 IOPS/GB
  • a 400GB NVMe PCIe SSD can deliver 290,000 IOPS = 725 IOPS/GB
  • a 256GB 3D XPoint SAS SSD can deliver 461,000 IOPS = 1,800 IOPS/GB


Customers often use capacity as a metric when comparing storage solutions, and gamers tend to focus their benchmarks on throughput (MB/s). But for servers and storage systems throughput is hardly relevant, because random I/O is typically the bottleneck. The most important metric is the number of random read or write operations per second. That’s why storage and virtualisation specialists tend to talk about IOPS.

(of course not all IO’s are created equal, but that’s another topic)

An even better metric to compare storage solutions or storage media is the number of IOPS per GB, so we calculated the IOPS per GB metric for every disk type above.
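The IOPS/GB figures in the list above are simply IOPS divided by capacity. For a few of the example drives:

```python
# IOPS per GB for some of the example drives listed above: (IOPS, size in GB)
drives = {
    "6TB SATA HDD": (100, 6000),
    "600GB SAS HDD": (160, 600),
    "512GB SATA SSD": (84_000, 512),
    "400GB NVMe PCIe SSD": (290_000, 400),
}
for name, (iops, size_gb) in drives.items():
    print(f"{name}: {iops / size_gb:.3g} IOPS/GB")
```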

How not to: set up Amazon WorkMail & WorkDocs

Tried to set up WorkMail & WorkDocs after viewing this 30 minute video:

Didn’t want to use an existing Active Directory server and went for Simple AD:

Simple AD is a Microsoft Active Directory–compatible directory from AWS Directory Service powered by Samba 4 (developed with Microsoft’s assistance).

Simple AD supports commonly used Active Directory features like user accounts, group memberships, domain-joining of Amazon Elastic Compute Cloud (Amazon EC2) instances running Linux & Microsoft Windows, Kerberos-based single sign-on (SSO), and group policies. This makes it even easier to manage Amazon EC2 instances running Linux and Windows, and to deploy Windows applications on AWS.

WorkMail is really easy to set up if you can change the DNS settings of your domain yourself. Since WorkMail isn’t available in all AWS regions, you should note the following:

A Simple AD running on AWS only works within a region, so make sure you set up WorkMail & WorkDocs in the same region in order to use the same public URL and the same user accounts.

DNS setup for WorkMail : an example of a DNS setup for WorkMail

HowTo: migrate your DNS hosting to Route 53

Today we have migrated the DNS hosting of the domain to Amazon AWS Route 53. It’s easy, let’s have a look at the process.

For several AWS services you have to choose a region. You don’t for Route 53: it’s a global service.


AWS allows you to transfer a domain to Route 53. This is the easy way: you don’t have to recreate your records if you use this wizard.

But if you like you can keep your current registrar. We wanted to keep using Transip because they are cheaper as a registrar and it’s practical to have one place to administer all domain names.

Use the following method in case you want to keep using your current registrar:
1. create your zone at Route 53
2. create your records / or import a zone file
3. change your name servers at your registrar (in this example
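If you prefer the CLI over the console for step 2, record creation can be expressed as a change batch for `aws route53 change-resource-record-sets` (the domain name and IP address below are placeholders):

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{ "Value": "192.0.2.1" }]
      }
    }
  ]
}
```

Saved as batch.json, this would be passed to `aws route53 change-resource-record-sets --hosted-zone-id <zone-id> --change-batch file://batch.json`.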

DNS zone before the change: (screenshot of the zone at Transip)

DNS zone after the change: (screenshot of the zone at Route 53)

Projected costs: $0.50 a month…

Introduction movie Amazon Route 53

8 minute intro movie on Route 53

Amazon Route 53 has a simple web-services interface that lets you get started in minutes. Your DNS records are organized into “hosted zones” that you configure with Route 53’s API. Route 53 provides a simple set of APIs that make it easy to create and manage DNS records for your domains. You can call these directly; all this functionality can also be accessed via the AWS Management Console.

The sheer size of AWS put in perspective

Amazon Web Services did about $7 billion of revenue in 2015. Sounds like a lot, but I can only comprehend big numbers like that in a comparison. So let’s try to put those numbers in perspective:

As I am familiar with technology and other toy companies, let us compare this $7 billion of AWS with some other companies like VMware, NetApp, Avaya and Toys ‘R’ Us (chart: revenue per year in 100 million USD).

This $7 billion is only a small part of Amazon, of course. Mother ship Amazon’s total revenue is $100 billion, comparable with Microsoft doing $90 billion (chart: Microsoft in blue & Amazon, last 2 years).

To put those numbers in perspective:

The revenues of Amazon and Microsoft are comparable with the GDP of countries like Ecuador, Slovakia and Morocco (with 16, 5 and 33 million inhabitants respectively).

Gartner estimated recently that Amazon Web Services offers 10 times as much computing capacity as the next 14 players in the market, combined.

(yes 10 x all the other players in the Magic Quadrant including Microsoft and Google)


Due to its pace of growth, AWS is on track to be a $50 billion business by 2020. That’s about the size of Cisco or Coca-Cola.



The number of servers is unknown. What we do know:

  • in 2014 AWS had 1.4 million servers (which implies a profit of 3,000 USD per server in 2014)
  • Gartner estimates AWS has more than 2 million servers
  • AWS servers are spread over 28 zones
  • a typical datacenter has over 80,000 servers

So let us conclude:

AWS is utterly massive.

SSH login on EC2 Linux without .pem file?

By default one has to use a .pem file to SSH into an Amazon Linux instance. This is a pretty good idea, and safer than a password. But sometimes it’s more practical to use a username and password. You still can; this is how:


Add your downloaded .pem file to your SSH agent on Linux and Unix systems like OS X:

ssh-add /path/to/pemfile.pem

Login without .pem file? Follow these steps:

  1. log in using your .pem file (ssh -i pemfile.pem ubuntu@publicip)
  2. create a new user to log in with a password (sudo useradd -s /bin/bash -m -d /home/adminbert -g root adminbert)
  3. set a strong password (sudo passwd adminbert)
  4. configure SSH by editing the config file: change PasswordAuthentication from no to yes (sudo nano /etc/ssh/sshd_config)
  5. restart SSH (sudo service ssh restart)

You can now login using a username and password.

(ssh username@publicipaddress)

AWS Solutions Architect – Education options

No time to read? Recommended training is acloud.guru – use qwikLABS as backup.

update: when you apply for AWS Activate you get 80 credits for self-paced labs ($80 value).

There are a lot of options to educate oneself these days. This post will focus on online training options, because I just love CBTs: you can set the pace yourself, skip parts you are already familiar with, or pause the instructor if you want to Google or try something yourself. Learning on the go is of course another big benefit.

What again?


To be clear: I want to pursue two certifications from Amazon:

The AWS Certified Solutions Architect – Associate exam is intended for people with experience designing distributed applications and systems on the AWS platform. Exam concepts you should understand for this exam include:

  • Designing and deploying scalable, highly available, and fault tolerant systems on AWS
  • Lift and shift of an existing on-premises application to AWS
  • Ingress and egress of data to and from AWS
  • Selecting the right AWS service based on data, compute, database, or security requirements
  • Identifying proper use of AWS architectural best practices
  • Estimating AWS costs and identifying cost control mechanisms


The AWS Certified Solutions Architect – Professional exam validates advanced technical skills and experience in designing distributed applications and systems on the AWS platform. Example concepts you should understand for this exam include:

  • Designing and deploying dynamically scalable, highly available, fault tolerant, and reliable applications on AWS
  • Selecting the right AWS services to design and deploy an application based on given requirements
  • Migrating complex, multi-tier applications on AWS
  • Designing and deploying enterprise-wide scalable operations on AWS
  • Implementing cost control strategies


Amazon recommends qwikLABS, so I looked at this first. They use a weird credit system: you buy credits and then pay for every module. The complete Architect Associate level course will cost you about 76 USD – not expensive at all for 5 hours of content. You can also buy individual parts for about 10 points each.

There is also a lot of free content. Currently they have about 30 free introduction labs.
This is how it works:
– you apply for a lab
– you receive a lab instruction PDF
– then you launch the lab (in fact, an AWS account is created for you)
– you log in to AWS and play around following the PDF instructions

+ hands-on experience, and you don’t have to use your own AWS account and credit card

+ recommended by Amazon

– not suitable for learning on the go


CBT Nuggets

Love them; they were the first option that came to mind. They have a lot of content on Amazon Web Services, but they only cover the Associate-level Solutions Architect for now. So I will have to look further, because I want to do both the Associate and Pro level to become a “real nimbus architectus”. CBT Nuggets is also an expensive option: at least 100 USD a month. Not recommended.


Pluralsight

Formerly known as TrainSignal; again a big name in the industry. I happen to have a corporate account, so it would be nice if they offered both the Associate and Pro AWS Solutions Architect training. Pluralsight is not expensive at all (about 30 USD a month for an individual) and they offer about 20 courses on AWS. However, they do not offer a course directly aimed at the AWS Solutions Architect certification. Not an option.


CloudAcademy

Found CloudAcademy, never heard of it before. Looks good; they offer the courses I am looking for.

The courses contain course material, labs, and quiz modules. For the Associate-level certification there are 11 labs and 10 video courses, each with a quiz at the end. They offer subscriptions starting at 25 USD a month, and you can get started easily: 7 days for free without entering your credit card. Looks good to me!

After a few hours you will get bored by these video lessons. The content is there, it’s technically correct but boring. Not sure why. Maybe a lack of personality or the intonation.

acloud.guru

11 hours of video, 74 lessons and 230 quiz questions for a fixed price. At the moment of writing there is a discount: the course costs only 27 EURO.

Acloud.Guru also offers the AWS Solutions Architect professional course.

+ study on the go

+ good instructor – enthusiasm & personality

+ active community

+ course is being updated all the time


So: acloud.guru comes highly recommended.


qwikLABS is recommended if you need some extra material, or when you need guidance with the hands-on labs.