During last year’s AWS re:Invent conference, AWS announced its plan to support larger EBS volumes; at the time, the maximum EBS volume size was 1TB. If a user needed a volume or file system larger than 1TB, they had to rely on drive aggregation techniques like software RAID or drive concatenation, which increased complexity, cost, and risk.
Today, AWS announced the availability of the promised larger and faster EBS volumes. You can now create SSD-backed (gp2 and io1) EBS volumes up to 16TB in size, with up to 20,000 Provisioned IOPS.
Brandorr Group is an Advanced Amazon Web Services Consulting Partner with decades of experience architecting, automating, and managing cloud infrastructure using AWS best practices, which includes database backups and disaster recovery.
We help our clients successfully execute their cloud strategy by providing architecture and implementation services with 24x7x365 on-call emergency response. We’d love to help with your automation and scaling needs; contact us today.
Amazon has finally released EBS volumes larger than 1TB, a feature many have been waiting on for years. Read the announcement for more details, but the quick summary is that you can now provision SSD (gp2 and io1) EBS volumes up to 16TB in size, with up to 20,000 Provisioned IOPS:
We realized we needed an External Node Classifier (ENC) for our Puppet environments in 2011, after it became clear that iClassify would no longer be a viable solution for the future. (The author of iClassify, Adam Jacob, had moved on to write Chef.) After evaluating our options, we narrowed the field down to Puppet Dashboard and Foreman. It turned out that at the time, Puppet Dashboard wasn't really an ENC; it was largely a reports processor and dashboard for monitoring the status of Puppet runs. Foreman, on the other hand, even in 2011 had a fully featured API and full ENC support, not to mention bare-metal provisioning options, which we didn't need at the time.

Shortly after migrating to Foreman, we realized we needed a way to pull lists of hosts out of Foreman for other management purposes. This led us to write a tool that eventually got fleshed out into the first official Foreman CLI, called "foremancli". It was fairly basic, in that it could only pull information out of a Foreman server, but it did meet our most pressing needs. We started development on foremancli's successor, Hammer, but we got busy, and with Red Hat really building up their Foreman team and having people to spare, we handed off further development of Hammer to Red Hat.

At some point along the way, the theforeman.org infrastructure needed to grow, and we offered to host a number of the project's servers, including its build environments, website, and wiki. (This was around the time the website was refactored to actually not look terrible.) We still sponsor the project in this way, and we still contribute via bug reports and testing pre-release builds. Over the years we have organized many Foreman talks in the New York City area, and have generally tried to support this great project any way we can.
A few things of note
I wrote a CLI for Foreman, which I creatively named foremancli.
It's now available as a gem, for easy installation.
To install, simply run:
$ gem install foremancli
Creating a personal apt repo is a great way to manage custom packages while still taking advantage of apt's management and dependency-resolution abilities. (We are not going to cover all the specifics of why one would want a custom apt repo; that should be self-explanatory.) Creating a secure repo accessible via apt is relatively straightforward; three items need to be set up: the repo directory, an httpd, and GPG to sign the packages.
Setting up the repository directory
The first step is to create the directory structure.
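A minimal sketch of that step, assuming the repo lives under your home directory as in the rest of this post (reprepro, used below, only needs the conf/ directory up front; it creates dists/ and pool/ itself):

```shell
# Base path for the repository; the post uses /home/jason/apt-repo,
# so substitute your own home directory as appropriate.
REPO="$HOME/apt-repo"

# reprepro requires only conf/ to exist before the first package is added.
mkdir -p "$REPO/conf"
```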
In the conf directory there must be a configuration file called distributions.
For the sake of simplicity, we are only defining one component to work with, called main. It works just like the main component of official repos from vendors like Debian and Ubuntu. (Note: for the time being we are not signing our packages just yet. Soon!)
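A minimal conf/distributions example for reprepro follows; the Origin, Label, and Description values are illustrative placeholders, and Codename should be the release you are actually targeting (precise is used here as an example):

```
Origin: apt.example.com
Label: Example Repo
Codename: precise
Architectures: i386 amd64
Components: main
Description: Personal apt repository
```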
The other directories will be created as needed by reprepro, the tool that adds packages to the repo, so you do not need to worry about them.
Setting up httpd
Regardless of what httpd you choose to use, you want to allow for HTTP access to the /home/jason/apt-repo directory.
For Apache, use the following directory definition:
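A sketch of such a definition, assuming Apache 2.2-era syntax (on Apache 2.4 and later, replace the Order/Allow lines with `Require all granted`):

```
Alias /apt-repo /home/jason/apt-repo

<Directory /home/jason/apt-repo>
    Options +Indexes +FollowSymLinks
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>
```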
Adding packages to repo
Adding packages to the repo is very easy:
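With reprepro it is a single command; the .deb filename below is a hypothetical example, and the codename (precise here) must match the one defined in conf/distributions:

```shell
# Add a package to the "precise" distribution of the repo.
reprepro -b /home/jason/apt-repo includedeb precise mypackage_1.0-1_amd64.deb
```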
Use apt-get to fetch packages with your new repo
You must tell your clients machines to use the new repo in order to fetch packages from it:
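One way to do this is a line in /etc/apt/sources.list (or a file under /etc/apt/sources.list.d/); the hostname below is a placeholder for wherever your httpd serves the repo, and the codename must match your distributions file. Run apt-get update afterwards:

```
deb http://apt.example.com/apt-repo precise main
```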
If your packages are sane (which will be covered in another blog post!), you should have been able to install your packages without any problems.
Almost. Did you get an error about your repository being unsigned? Read on!
Signing the repo
Dpkg packages are not typically signed individually the way RPM packages are. Instead, the package publisher signs the contents of the repo the package is distributed from, in order to validate the authenticity of the packages in it.
The first step is to create a GPG key used to sign the repo. In the section below, the material in CAPS is example user input.
If you haven't done so already, secure the permissions of your .gnupg directory.
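GnuPG expects its directory to be readable only by its owner, so a quick sketch of that fix:

```shell
# gpg warns about (and may refuse) a world-readable keyring directory,
# so restrict ~/.gnupg to its owner.
mkdir -p "$HOME/.gnupg"
chmod 700 "$HOME/.gnupg"
```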
Now let's generate that key!
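Key generation is interactive; gpg will prompt for the key type, key size, expiration, your name and email address, and a passphrase:

```shell
gpg --gen-key
```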
We need to generate a lot of random bytes. It is a good idea to perform some other action (type on the keyboard, move the mouse, utilize the disks) during the prime generation; this gives the random number generator a better chance to gain enough entropy.
You should be able to see the key now:
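For example (the uid, dates, and subkey below are illustrative placeholders; the key ID shown matches the one used in the rest of this post):

```
$ gpg --list-keys
/home/jason/.gnupg/pubring.gpg
------------------------------
pub   2048R/EE519117 2013-01-01
uid                  Jason Example <jason@example.com>
sub   2048R/0A1B2C3D 2013-01-01
```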
It is the latter part of the pub key ID we are concerned about, in this case EE519117
We want to add the following line to the distributions file:
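Each stanza in conf/distributions that should be signed gets a SignWith line referencing that key ID:

```
SignWith: EE519117
```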
The next time you add a package to the repo, add it with the following command:
Adding the package will cause the repo to be signed with the key. If you do not want to wait until you've added a package to sign the repo, you may do the following:
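reprepro can regenerate (and therefore re-sign) the repository index files on demand:

```shell
# Rewrite and re-sign the repo's index files without adding a package.
reprepro -b /home/jason/apt-repo export
```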
Client configuration for the new key
You need to publish your key and have the clients install it in order to use it. You can extract it like so:
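For example, export the public key in ASCII-armored form to a file in the repo's document root, so clients can fetch it over HTTP (the filename is arbitrary):

```shell
gpg --armor --export EE519117 > /home/jason/apt-repo/example-repo.gpg.key
```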
On your client machines, you can install it like so:
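Assuming the key file was published at the repo root as above (the URL is a placeholder), fetch it and hand it to apt-key:

```shell
# Download the repo's public key and add it to apt's trusted keyring.
wget -qO - http://apt.example.com/apt-repo/example-repo.gpg.key | sudo apt-key add -
sudo apt-get update
```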
You should now be able to install packages without any issues regarding security signatures!
Future blog posts:
Multi component repos
Passphrase-less repos (dangerous, but easy to manage in trusted environments)
Custom dpkg packages
Easy backporting of dpkg packages
Brandorr Group LLC is a one-stop cloud computing solution provider, helping companies manage growth and ship new projects using cloud and scalability best practices.