Needless to say, I'm a bit obsessed with stats myself. Without a nigh-unmanageable volume of data points with which I can paint pictures of what's going on in my stack, I start to feel like a kid at Christmas with a mountain of presents and no name tags. That said, I'm not picky about how I get those stats or where they're stored. I have some opinions, sure, but if I can make use of what I've got and get what I need, I'm fine.
The problem is that the tools that exist to collect those stats are extremely opinionated. Almost every stats collection tool out there today:
- comes as part of a larger metrics aggregation and analysis suite / platform
- comes without the bits necessary to grab basic system stats (CPU % util, disk and network I/O, mem util, etc.)
- is not easily extensible to gather custom "non-system" stats (Redis, JMX, Apache, you name it)
- is a nightmare to build / install / configure
- makes heavyweight assumptions about where (and in what format) stats are going to be shipped
- does all of the above
This becomes a hindrance when you try to give multiple teams in an organization the flexibility to manage their tools and data in their own way. Teams are left with a few unpleasant choices:
- conform to whatever stats collection tool the InfraNerds are using
- use whatever stats collection tool they want and:
  - leave the InfraNerds blind (read: "induce much pain")
  - stick the InfraNerds with the task of integrating multiple tools
- deploy multiple stats collection tools to feed the backend they like in addition to the one used by the InfraNerds
This sucks. A lot.
So, Stat Badger is my attempt to avoid such issues entirely. It imposes zero opinions on your stats or their destinations. The core loop gathers no stats and defines no outputs on its own; those decisions are left to the user, by way of defining one or more Modules and Emitters.
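To make that shape concrete, here's a minimal sketch of the pattern - illustrative only, with hypothetical class and method names rather than Stat Badger's actual interfaces: a module produces a dict of stats, an emitter ships a dict of stats somewhere, and the core loop just wires every module to every emitter on a timer.

```python
import json
import os
import time


class LoadModule:
    """Hypothetical module: gathers one family of stats (load average)."""

    def get_stats(self):
        # os.getloadavg() is POSIX-only: 1-, 5-, and 15-minute load averages.
        one, five, fifteen = os.getloadavg()
        return {"load.1min": one, "load.5min": five, "load.15min": fifteen}


class StdoutEmitter:
    """Hypothetical emitter: ships a batch of stats to stdout as JSON."""

    def emit(self, stats):
        print(json.dumps(stats, indent=2))


def run(modules, emitters, interval=10):
    # The core loop: poll every module, hand the combined results to every
    # emitter. Swapping backends means swapping emitters; the loop itself
    # never changes.
    while True:
        stats = {}
        for module in modules:
            stats.update(module.get_stats())
        for emitter in emitters:
            emitter.emit(stats)
        time.sleep(interval)


if __name__ == "__main__":
    run([LoadModule()], [StdoutEmitter()])
```

The point of the split is that adding a new stat source or a new destination never touches the loop - you just drop in another module or emitter.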
The basic philosophy of Stat Badger is that stats should be a commodity - a raw "material" that can be extracted in large volumes and sent anywhere a consumer might want it, ready to be manipulated, refined, and transformed by any number of processes into myriad useful products and services.
Now, even though Stat Badger makes no assumptions about your stats, it does ship with a standard set of modules and emitters to get you started. Specifically, it ships with modules to gather detailed system stats (CPU, memory, network, disk, load, and per-process memory / CPU... so far), and emitters to spit stats out to a number of backends (InfluxDB 0.8, Graphite, Kafka, stdout... so far).
Getting started should be as simple as:
```
git clone https://github.com/cboggs/stat-badger
cd stat-badger
python badger_core.py -f config.json
```
This should start up a foreground process that spits pretty-printed JSON to stdout. For more interesting experiments, edit config.json and add to the list of emitters to ship your data to InfluxDB, Graphite, Kafka, or all of the above - all at the same time.
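For a picture of what that might look like, here's a hypothetical config.json - the keys and structure below are guesses for illustration only, not Stat Badger's actual schema, so check the config.json in the repo for the real format:

```json
{
    "modules": ["cpu", "memory", "network", "disk", "load"],
    "emitters": ["stdout", "graphite", "influxdb"],
    "graphite": {"host": "graphite.example.com", "port": 2003},
    "influxdb": {"host": "influx.example.com", "port": 8086, "database": "badger"}
}
```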
Stat Badger is not yet a fully-polished product. It lacks the wiring required for tests (I'm not a dev by trade, so I've been cheating so far). It has a few limitations (addressed in the "More to Come" section of the README on GitHub). It also needs more modules and emitters to become as universal as I envision.
All that said, I hope you like Stat Badger, and I hope even more so that you'll contribute and help make it a strong, solid tool for stats gathering!