Wednesday, March 30, 2016

Let's Build Something: Elixir, Part 2 - Supervising Our GenServer

In Part 1 we built a simple GenServer and worked our way through making it write a timestamp to a local file.

During that process, we ran into a usage issue - we had to explicitly start the GenServer before we could invoke its write_timestamp/2 function. It makes sense, but it's inconvenient and dependent on human interaction. We should (and can!) fix that.

Automatically Starting Our GenServer


The file lib/stats_yard.ex is the main entry point for our app, so that seems like a good place to add a call to StatsYard.TimestampWriter.start_link/0 (ignoring the supervisor bits for now):
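Here's roughly what that looks like, sketched from memory of a mix-generated project of this vintage (the exact generated comments may differ):

```elixir
defmodule StatsYard do
  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    # Start our GenServer explicitly for now - the supervisor bits
    # below are left just as mix generated them
    StatsYard.TimestampWriter.start_link

    children = [
      # Define workers and child supervisors to be supervised
      # worker(StatsYard.Worker, [arg1, arg2, arg3]),
    ]

    opts = [strategy: :one_for_one, name: StatsYard.Supervisor]
    Supervisor.start_link(children, opts)
  end
end
```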


Now to verify that it acts as we'd expect:
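After firing up a session with iex -S mix, something along these lines should confirm it (output illustrative; your timestamp will differ):

```elixir
iex(1)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", 1459134853371)
:ok
```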


Much better! Now let's see what happens when it dies, as most everything on a computer is likely to do at the worst possible time.

There are a few ways to easily kill a process in Elixir, but I always enjoy throwing a value into a function that it can't handle and watching the poor process get shredded. I'm a jerk like that. Let's pass a tuple via the timestamp argument and see what happens:
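Assuming our handle_cast interpolates the timestamp into a string before writing it, a tuple should blow things up nicely. Note that the cast itself still returns :ok, since casts are fire-and-forget; the crash shows up asynchronously (error output abbreviated and illustrative):

```elixir
iex(2)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", {:oops, :bad_value})
:ok
iex(3)>
[error] GenServer StatsYard.TimestampWriter terminating
** (Protocol.UndefinedError) protocol String.Chars not implemented for {:oops, :bad_value}
```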


Excellent! Our GenServer is dead as a hammer. Soooo now what?

Well, we can restart it manually in our iex session, but that's a non-starter when you consider your app running unattended. We could also just drop out of our session and then start it back up, but that's still pretty lame.

Enter supervisors! If we take another glance at the chunk of code we saw above, paying special attention to the bits I said to ignore, we'll see the structure we need to use to start up our GenServer as a supervised process:

  • The children list: your Supervisor's child processes, each of which can be either a regular process or another Supervisor (a sub-Supervisor)
    • Each item in this list names the module for a child process, and the Supervisor will call that module's start_link function
    • Notice that the arguments to be passed to your start_link function are provided here as a list. This is done so that the Supervisor's own start_link function doesn't have to handle arbitrary argument counts (arities) based on your process's start_link arity
  • The opts list: your Supervisor's options. For now, the provided values are more than sufficient, and we'll talk more about what each one means in a later post
  • The Supervisor.start_link call: the startup call for your Supervisor process, which (shockingly!) looks awfully similar to the call we make for our own GenServer
Now let's move our GenServer startup bits into the Supervisor's children list, and see what we can make of it:
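Under the Supervisor.Spec conventions of this Elixir vintage, that amounts to swapping our explicit start_link call for a worker entry in children (a sketch; only start/2 is shown):

```elixir
def start(_type, _args) do
  import Supervisor.Spec, warn: false

  children = [
    # Our start_link takes no arguments, so the argument list is empty
    worker(StatsYard.TimestampWriter, [])
  ]

  opts = [strategy: :one_for_one, name: StatsYard.Supervisor]
  Supervisor.start_link(children, opts)
end
```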


And the test:
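Crashing the GenServer and then immediately using it again should now just work, since the Supervisor restarts it for us (session and error output illustrative):

```elixir
iex(1)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", 1459134853371)
:ok
iex(2)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", {:oops, :bad_value})
:ok
[error] GenServer StatsYard.TimestampWriter terminating ...
iex(3)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", 1459134853999)
:ok
```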


Woohoo! Our application is now considerably more "self-healing" than it was before. We can start up iex, and immediately start writing timestamps without explicitly starting our GenServer. Then we can crash the GenServer process, and immediately continue on writing timestamps without having to explicitly restart the process. Excellent!


Experiments

If you're the curious sort (and you really should be! It's awesome!), you may want to poke around with this a bit more by trying to crash our GenServer several times and seeing if the behavior persists. See if you can find a limit to what we've done here, and see if the docs for Elixir's Supervisors can guide you toward a fix.

Next Time

We've got a nice simple GenServer automated and supervised, but there's no guarantee that our next batch of code changes won't break something in some unforeseen way. We'll poke around with ExUnit next time, and see if we can start a basic unit test suite for our project.

For now, you can peruse the source for this post at: https://github.com/strofcon/stats-yard/tree/lets-build-2

Monday, March 21, 2016

Let's Build Something: Elixir, Part 1 - A Simple GenServer

I'm an "ops guy" by trade. That's my main strength, tackling things at all layers of a SaaS stack to fix broken things and automate my way out of painful places. That said, I find that my mind wanders into the realm of, "wouldn't it be cool if I could build <insert cool thing here>?" I like to tinker, and I love solving problems, so I tend to poke around with some development projects on the side for fun and for my own education. I also find that I'm better at my "ops day job" for having understood some of what makes the software development machine tick.

I've been playing with Elixir a bit lately, and it's incredibly attractive for a lot of reasons that I won't get into here. I learn faster when others peruse my code, and I like sharing what I've learned as I go along... Sooooo, let's build something! 


Erlang (and by extension, Elixir) lends itself very well to highly-concurrent and distributed systems, so I want to build something that leverages that strength and bends my brain in some weird ways. To that end, I want to build a time series database. A very simple one, mind you, but one that provides some measure of useful functionality and takes advantage of the things Elixir, OTP, and BEAM bring to the table.

This first part will be pretty simple - we'll spin up a new project with mix, define a GenServer that writes the current timestamp to a file, supervise it, and then manually test out the supervision. We'll get into ExUnit and such next time.

Note: My bash commands aren't terribly copy-pasta-friendly in these posts, and I'm OK with that. By simply pasting my actual shell output, you can more easily figure out what directory I'm in at any given time. Fortunately we don't spend much time in bash, at least not in the code snippets.


Create a New Project

I'm going to call my awesome new world-changing time series database "StatsYard". We'll use mix to get kicked off:
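The standard incantation, shown as a transcript rather than copy-paste-ready commands:

```shell
$ mix new stats_yard
$ cd stats_yard
```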


This lays down the bones we need to start building stuff. (Note: Even though we named our project "stats_yard", the actual namespace will be StatsYard, as shown below.)

Define Our GenServer

Let's create a GenServer that writes the current timestamp plus some message to a file upon request.

Mosey on over to stats_yard/lib and create a new directory with the same name as our project, stats_yard. Within that directory, create a file named timestamp_writer.ex:


Crack that bad boy open and let's build a GenServer!

I like to name each file after the module it contains wherever possible. It isn't required; I just find it eases troubleshooting. It's also the convention I've seen everywhere else: name your Elixir files after the modules they contain, but in all lower case and with words separated by underscores (_). Now let's define our module StatsYard.TimestampWriter:
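Here's a sketch of the module consistent with the walkthrough below; the exact write format (timestamp plus a trailing newline) is an assumption on my part:

```elixir
defmodule StatsYard.TimestampWriter do
  use GenServer

  # Public API: fire-and-forget write of `timestamp` to `path`
  def write_timestamp(path, timestamp) do
    GenServer.cast(__MODULE__, {:write_timestamp, path, timestamp})
  end

  def start_link do
    GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  end

  # Initial state: a simple counter of writes performed
  def init(:ok) do
    {:ok, 0}
  end

  def handle_cast({:write_timestamp, path, timestamp}, count) do
    File.write!(path, "#{timestamp}\n")
    {:noreply, count + 1}
  end
end
```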



Not too much special here if you're generally familiar with GenServer:
  • write_timestamp/2: a public function to make it easier to invoke the GenServer's callbacks
  • start_link/0: the function we'll use to start up our process running the GenServer
  • init/1: called by start_link, and how we set the initial state of our GenServer (for now, just the integer 0)
  • handle_cast/2: our cast callback for actually doing the work. We could have made the timestamp a value that's calculated every time we cast to the GenServer, but for the sake of a TSDB we'll want to be able to accept arbitrary timestamps, not solely "right now" timestamps
Let's see if it works! From the top level of our project, we'll hop into iex:


Now let's run the appropriate commands and hopefully we'll see a timestamp written to the file we specify:
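Here's the sort of thing you'd see (stack trace abbreviated; exact formatting varies by Elixir version):

```elixir
iex(1)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", 1459134853371)
** (ArgumentError) argument error
    :erlang.send(StatsYard.TimestampWriter, {:"$gen_cast", {:write_timestamp, "/tmp/foo.tstamp", 1459134853371}})
    (elixir) lib/gen_server.ex: GenServer.do_send/2
```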



Oops. We must have missed something - GenServer.do_send/2 is unhappy with our arguments. This particular function accepts two arguments: the Process ID (PID) of the GenServer whose callback you're trying to invoke, and the body of the message you want to send. In our case, our public write_timestamp/2 function is actually not calling a private function in our GenServer, but is instead sending a message to the GenServer's PID. That message contains a tuple that the GenServer should pattern match appropriately upon receipt.

So, where did we go wrong? The message payload ({:write_timestamp, "/tmp/foo.tstamp", 1459134853371}) certainly looks correct, so it seems to be the first argument, the GenServer's PID.

When you name a GenServer you're effectively mapping an atom to a PID, which allows you to reference the process without having to know its PID in advance. In our case, __MODULE__ equates to the atom StatsYard.TimestampWriter, which should then map to our PID... Oh, right! We don't have a PID, because we never started our GenServer process. Easy fix!
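Start the process, then cast away (PID illustrative):

```elixir
iex(2)> StatsYard.TimestampWriter.start_link
{:ok, #PID<0.92.0>}
iex(3)> StatsYard.TimestampWriter.write_timestamp("/tmp/foo.tstamp", 1459134853371)
:ok
```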




Now we just need to check our output file and make sure it actually did what it was supposed to:
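Reading the file back from iex should show our timestamp (the exact contents depend on how handle_cast formats the write; this assumes a trailing newline):

```elixir
iex(4)> File.read!("/tmp/foo.tstamp")
"1459134853371\n"
```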



Success!

Next Time

That's it for now, nothing too interesting just yet. Next time we'll see what happens when our GenServer dies, and what we can do about that.

For now, you can peruse the source for this post at: https://github.com/strofcon/stats-yard/tree/lets-build-1

Tuesday, March 15, 2016

Sysdig Cloud - Monitoring Made Awesome Part 1: Metrics Collection

Just in case you hadn't noticed before now, I'm a tiny bit obsessed with ops metrics. Most of my preoccupation with metrics seems to stem from the fact that... well... it's hard. "Metrics," for all that term entails, can be a difficult problem to solve. That might sound a bit odd given that there are a thousand-and-one tools and products available to tackle metrics, but with a sufficiently broad perspective the problem becomes pretty clear.

I've worked with a lot of different metrics tools while trying to solve lots of different problems, but I always tend to come across the same pain points no matter what tool it is that I'm using. As a result, I'm pretty hesitant to really recommend most of those tools and products.

Fortunately, that may very well be changing with the advent of Sysdig Cloud. Sysdig has really nailed the collection mechanism and is doing some great work on the storage and visualization fronts. This is the first in a series of posts describing how I think Sysdig is changing the game when it comes to metrics and metrics-based monitoring.

Disclaimer: I am not currently (or soon to be) employed by Sysdig Cloud, nor am I invested in Sysdig Cloud. I'm just genuinely impressed as an ops guy who's got a thing for metrics. I've been using Sysdig for a few months, and I only plan to brag on features that I've used myself. I'll also point out areas where Sysdig is a bit weak and could improve, because let's face it - no one's perfect. :-)

The Problem(s)

There are essentially three stages to the life-cycle of a metric: Collection, Storage, and Visualization. Believe it or not, they're all pretty hard to solve in light of the evolving tech landscape (farewell static architectures!). This post will tackle Collection, and is a bit long since they do it so well.

When Metrics Collection Gets Painful

For metrics to be of any use you need some mechanism by which to extract / catch / proxy / transport them. The frustrating part is that there are actually several layers of collection that need to be considered. If we take the most complex case - a container-based Platform as a Service - three categories of metric should do the trick: host, container, and application. Handling all of these categories well is difficult - I tried with Stat Badger, and it was... well... a bit unpleasant.

Host metrics are generally fairly easy to collect and most collectors grab the same stuff, albeit with varying degrees of efficiency and ease of configuration.

Container metrics aren't too terribly hard to collect, though there are certainly fewer collectors available. The actual values we care about here are usually a subset of our host metrics (CPU % util, memory used, network traffic, etc) scoped to each individual container. This starts to uncover the need for orchestration metadata when considered within a PaaS environment.

Application metrics can easily become the bane of your existence. Do we have a way to reliably poll metrics out of the app (for example, JMX)? If so, does our collector handle this well or do we need to shove a sidecar container into our deployments to provide each instance with its own purpose-built collector? If not, do we try to get the developers to emit metrics in some particular format? Should they emit straight to the back-end, or should they go through a proxy? If they push to some common endpoint(s), how best can we configure that endpoint per environment? Then once we've answered these questions, how on earth do we correlate the metrics from a particular application instance with the container it's running in, the host the container is running on, the service the app instance belongs to, the deployment and replication / HA policies associated with that service, and so on and so on...???

Enter the Sysdig agent.

How Sysdig Makes Metrics Collection Easy

The Sysdig agent is, right out of the gate, uncommon in its ambition to tackle all layers of the collection problem.

Most collectors rely exclusively on polling mechanisms to get their data, whether it's by reading /proc data on some interval, hitting an API endpoint to grab stats, or running some basic Linux command from which to scrape details. This works, but is generally prone to errors when things get upgraded / tweaked, and can be fairly inefficient.

Sysdig does have the ability to do some of those things to monitor systems such as HAProxy and whatnot, but that's not its main mechanism. Instead, the Sysdig agent watches the constant stream of system events as they flow through the host's kernel and gleans volumes of information from said events. Pretty much anything that happens on a host, with the exception of apps that run on a VM such as the JVM or BEAM, will result in (or be the result of) a system event that the host kernel handles. This has a couple of huge benefits: it's very low-overhead, and it's immensely hard to hide from the agent. These two core benefits of the base collection mechanism allow for a number of pretty cool features.

Fine-Grained Per-Process Metrics

Watching system events allows the Sysdig agent to avoid having to track the volatile list of PIDs in /proc and traverse that virtual filesystem to get the data you want. All of the relevant information is already present in the system events and this opens the door to some really nifty visualization capabilities.

"Container Native" Collection

By inspecting every system event that flows by, there's no middle man in snagging container metrics. No need to hit the Docker /stats endpoint and process its output, no worries about Docker version changes breaking your collection, and ultimately no need to relegate yourself to Docker for your container needs. This also combines beautifully with the fine-grained per-process metrics to give visibility into the processes within your containers in addition to basic container-wide metrics. It's pretty awesome.

Automatic Process Detection

The above two features combine very nicely to allow the Sysdig agent to automatically detect a wide variety of services by their process attributes, simply by having seen a relevant system event flow past on its way to the host kernel. This allows some amazing convenience in monitoring applications since the agent immediately sees when a recognized process has started up - even when it's inside a container.

For example, if you're running a Kafka container, the Sysdig agent will detect the container starting up, see the JVM process start up, notice that the JVM is exposing port 9092, spin up the custom Sysdig Java agent, inject it into the container's namespace, attach directly to the Kafka JVM process (from within the container, mind you), and start collecting some basic JVM JMX metrics (heap usage, GC stats, etc) along with some Kafka-specific JMX metrics - all for free, and without you needing to intervene at all. That's awesome.

StatsD Teleport

I'm not going to dig into this one here since this is already a lengthy post. Just read this post from Sysdig's blog - and be amazed.

Orchestration Awareness Baked In

Orchestration metadata is 100% crucial to monitoring any PaaS or PaaS-like environment. One simply cannot have any legitimate confidence in their understanding of the health of the services running in their stack without being able to trace where any given instance of a service is running and where it lives in the larger ecosystem. Sysdig seems to have a strong focus on integration with a number of orchestration mechanisms. If you configure your orchestration integration correctly, then any metric collected via any of the paths will automatically be tagged with ALL of that metadata on its way to the Sysdig back-end. Even better? The agents on individual nodes in a cluster (with Kubernetes, for instance) don't need to know that they're part of the cluster; only the masters need to know. The masters will ship orchestration metadata to the back-end, and then when metrics come in from a node that is part of that master's cluster, they will automatically be correlated and immediately visible in the appropriate context. Seriously, that's been making me a VERY happy metrics geek.

To Be Continued (Later)

At some point after this series expounding the ways Sysdig is trying to tackle metrics and monitoring the right way (in my opinion), I'll get around to posting some more technical how-to pieces showing how best to make use of some of these features.

For now, happy hacking!

Monday, March 7, 2016

Deploying a Private PaaS: The Good, the Meh, and the Aw Crap

Moving your organization's dev and prod deployment architecture to a PaaS model can bring a lot of benefits, and a good chunk of these benefits can be realized very quickly by leaning on public PaaS providers such as RedHat's OpenShift Online, Amazon's ECS, or Google's Container Engine. Sometimes though, a public PaaS offering isn't a viable option, possibly due to technical limitations, security concerns, or enterprise policies.

Enter the private PaaS! There are a number of options here, all of which offer varying degrees of feature abundance, technical capability, baked-in security goodies, operational viability, and other core considerations. As with anything in the tech world, the only clear winner among the various tools is the one that best fits your environment and your needs. No matter what you choose, however, there are some key things to consider during evaluation and implementation that span the PaaS field independent of any particular tool.

Let's walk through some of the ups and downs of deploying a private PaaS. Please keep in mind that this post isn't about "Private PaaS vs. Public PaaS". Instead it assumes you're in a situation where public cloud offerings aren't a viable option, so it's more about "Private PaaS vs. No PaaS."


The Good

A full listing of all the good things a PaaS can bring would be pretty lengthy, so I'll focus on what I think are the most high-value benefits compared to a "legacy" deployment paradigm.


Increased Plasticity and Reduced Mean Time to Recovery

Plasticity: the degree to which workloads can be moved around a pool of physical resources with minimal elapsed time and resource consumption
Mean Time to Recovery (MTTR): the time taken to recover from a failure, particularly failure of hardware resources or bad configurations resulting in a set of inoperable hosts (at least in the context of this post)

Legacy architectures typically see one application instance (sometimes more, but it's rare) residing on a single (generally static) host. Even with some manner of private IaaS in place, you're still deploying each app instance to a full-blown VM running its own OS. This makes it difficult to attain any reasonable plasticity in your production environments due to the time and resources needed to migrate a compute workload, which in turn forces MTTR upward as you scale up.

PaaS workloads generally eschew virtualization in favor of containerization, which can dramatically reduce the time and resources needed to spin up and tear down application instances. This allows for greatly increased plasticity, which consequently drags MTTR down and helps keep it reasonably low as you scale up.


Reduced Configuration Management Surface Area

Configuration management is not only sufficient for handling large swaths of your core infrastructure, but has effectively become necessary for such. That said, there's a lot of value in reducing the surface area touched by whatever config management tool(s) you're using. A particularly unhealthy pattern that some organizations find themselves following is one in which every application instance host (virtual or not) is "config managed." In the case of bare metal hosting this makes good sense, but in the event that you're deploying to VMs... it's no good. At all. 

Reducing this surface area can act as a major painkiller by requiring much simpler and more consistently applied configuration management... er... configurations. With a PaaS, you only need to automate configuration for the hosts that run the PaaS (and hosts that run non-PaaS-worthy workloads), not the individual application instance containers. This makes life suck a lot less.


Consistency and Control

No one likes an overzealous gatekeeper - they're generally considered antithetical to continuous integration and deployment. On the other hand, an environment that isn't in a position to deploy to a public cloud platform is also very unlikely to be in a position to live without some fairly high level of control over its code deployments. The key here is automated gate-keeping, and a PaaS gives you a decent path to accomplishing this.

Running a private PaaS for both staging and production environments gives you ample opportunity to funnel all code deploys through a consistent pipeline that has sufficient (and ideally minimal-friction) automated controls in place to protect production. This allows Infrastructure Engineers to provide a well-defined and mutually-acceptable contract for getting things into the staging PaaS, and consequently provides a consistent and high-confidence path to production - all without necessitating manual intervention by us pesky humans. Basically, if the developer's code and enveloping container are up to snuff by the staging environment's standards, they're clear to push on to production.

Regarding consistency, developers also gain confidence in their ability to deploy services into an existing ecosystem by being able to rely on a staging environment that mirrors production with far fewer variables than might otherwise be present in a legacy architecture. Dependencies should all be continually deployed to staging, and thus assurance of API contracts should be much closer to trivial than not.

Making DevOps a Real Thing

Everyone's throwing around DevOps these days and it's exhausting, I know - but I'm still going to shamelessly throw my definition on the pile.

My current take on DevOps is that an engineering organization most likely contains ops folks who are brilliant within the realm of infrastructure, and developer folks who are brilliant within the realm of application code. If you consider a Venn diagram with two circles - one for infrastructure and one for code - most organizations are likely to see those circles sitting miles apart from one another, or overlapped so heavily as to be indecent. The former diagram could be called "the DevOps desert," and the latter "choked with DevOps". Neither of these are particularly attractive to me.

A well-devised and even-better-implemented PaaS has the potential to adjust that diagram such that the two circles overlap, but only narrowly. Ops people focus hard on infrastructure, and dev people focus hard on code, with a narrow and remarkably well-defined interface between the two disciplines. There's still plenty of room for collaboration and mutually-beneficial contracts, but dev doesn't have to muddy their waters with infrastructure concerns, and ops doesn't have to muddy their waters with code concerns. I think that could be a beautiful thing.

The Meh

There aren't a ton of objectively "bad" things about running a PaaS vs. a legacy architecture, but there are a few necessary unpleasantries to consider.

Introducing Additional Complexity

A PaaS is an inherently complex beast. There are mitigating factors here and the complexity is generally worth it, but it's additional complexity all the same. Adding moving parts to any system is, by default, a risky move. But in today's market and at today's scale, complexity is a manageable necessity - provided that you choose your PaaS wisely and hire truly intelligent engineers over those who can throw out the most buzzwords.

Network configurations

Containers need to be able to route to one another across hosts in addition to being able to reach (and be reachable by) other off-PaaS systems. Most PaaS products handle this or provide good patterns for it, but it still introduces a new layer of networking to consider during troubleshooting, automation, and optimization.

Maintenance and Upgrades

If you've built your PaaS and its integration points well, you'll end up with a compelling deployment target on which lots and lots of folks are going to run their code. This can make it tricky to handle host patching and upgrades without impacting service availability. That staging environment (and maybe even a pre-staging environment solely used for infrastructure automation tests) becomes very important here.

Data Stores

Anything that needs local persistent storage (think databases and durable message buses) is unlikely to be a good candidate for a PaaS workload. It can be made to work in some cases, but unless you're a big fan of your SAN, you're likely best to keep these kinds of things off the PaaS. Even if you can make it work, I'm not yet convinced that there is much value in having such a workload reside in such a fluid environment.

Capacity Planning and Resource Limits

Container explosions can bite you very quickly, most likely due to a PaaS' self-healing mechanism glitching out. This is a complex component and you're guaranteed to find really... um... "special"... ways of blowing out your resource capacity until you get a good pattern figured out. A clear pattern for determining and enforcing resource limits is going to be incredibly helpful.

Capacity planning also demands relevant data, which means you'll need some solid metrics, and that brings us to...

The "Aw, Crap..."

There are a couple of problems that a PaaS introduces which will, at some point, demand your attention - and punish you harshly should you neglect them.

Single-Point-of-Failure Avoidance

SPoFs are anathema to high availability. If you're going to introduce something like a PaaS, you would do well to think very hard about every piece of the system and how its sudden absence might impact the larger platform. Is the cluster management API highly available? Then you'll need to stick a load balancer in front of those nodes. What happens if the LB goes down though?

Or what if you're running your PaaS nodes as VMs? That's fine, until you factor in hypervisor affinities - can't afford to have too many PaaS nodes residing on a single hypervisor. Even if you account for node -> hypervisor affinities, can you ensure that your PaaS won't shovel a large portion of a service's containers onto a small subset of nodes that do reside on the same hypervisor? The extra layer of abstraction away from bare metal here is likely to introduce new SLA-impacting failure scenarios that you may not have considered.

You're very likely to be able to mitigate any SPoF you come across, but they're likely to be slightly different in some cases than what you've handled before, and it's worth considering how to avoid them up front.

Metrics and Monitoring

Disclaimer: I obsess over metrics. It's almost unhealthy, really, but hey - it works for me. So it's not too surprising that introducing a PaaS makes me very curious about how one can go about gathering relevant metrics from the various layers involved. There are effectively three layers or classes of metrics you need to concern yourself with in a typical PaaS: host, container, and application.

Host metrics are just as easy as they've ever been, so that's not of much concern. Collection and storage of metrics for semi-static hosts is largely a solved problem.

Container metrics (things like CPU, memory, network, etc. used per container) aren't too terribly difficult to collect, though storage and effective visualization can be more difficult due to the transient nature of containers - particularly when those containers are orchestrated across multiple hosts.

Application metrics (metrics exposed by the actual application process within each container) are potentially a real bear of a problem. Polling metrics out of each instance from a central collection tool isn't too attractive since it's fairly difficult to track transient instances such as those that reside in a PaaS. On the flip side, having each application instance emit metrics to some central collector or proxy is feasible, but storage and visualization are still difficult since you'll inevitably need to see those metrics in the context of orchestration metadata that is unlikely to be available to your application process.

These are not entirely insurmountable problems, but there are definitely not many viable products currently available that have a good grasp on how best to present the data generated within a PaaS. In fact, the only "out of the box" solution I've found so far that handles this well is Sysdig Cloud. These folks are onto something pretty awesome, and I plan to elaborate on that in my next post.

Most People Are Doing Containers All Wrong

Containers should run a single process wherever possible, and should be treated as entirely immutable application instances. There should be no need to log into a container and run commands once it's running on the production PaaS. Images should be built layer upon layer, where each layer has been "blessed" by the security and compliance gods. Ignoring these principles is worthy of severe punishment in the PaaS world, and your PaaS will be all too happy to dish out said punishment. Don't go in unarmed.


Summary

Implementing a PaaS in your environment is most likely well worth the effort and risk involved, just so that your organization can push forward into a more scalable and modern architecture. While there are certainly a few potential "gotchas," there's a lot to be gained by going into this kind of project with a level head and enough information to make wise decisions. Just keep in mind that this landscape is changing pretty rapidly, and moving to a PaaS is certainly not a decision to be made lightly - but it's still probably the right decision.