In the olden days, pulling log data and metrics from different sources (computers, networks, applications...) often required ACTUALLY pulling out log data and metrics from different sources. We had to programmatically configure and/or request that different systems first generate the data...then we had to move that data from one place to another. Cron jobs, FTP, hot folders, and programming tricks were the key gears pushing our data conveyor belts. Getting all the data we wanted in a timely fashion was a holy grail indeed.
Ever seen a Rube Goldberg machine? Yeah, that's how old school monitoring data flow works...
As we know, Elastic has aimed to simplify this endeavor. Enter "Beats": part of Elastic's monitoring automation platform. How many Beats do you need? Several, probably...one Beat per type of data you want to ship.
So, what can be beaten in this fashion? As Elastic puts it, Beats are "All kinds of shippers for all kinds of data": https://www.elastic.co/products/beats
In this meetup, Elastic's Mike Heldebrandt will present and answer questions about Filebeat and Metricbeat modules. He's seen many organizations putting Beats into action, so it's a great chance to hear about the state of the art.
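To give a flavor of how simple the setup can be, here's a minimal sketch of a filebeat.yml that tails log files and ships them to a local Elasticsearch node. The paths and host are placeholder assumptions, not values from the talk; check the Filebeat docs for your version before relying on the exact keys.

```yaml
# Minimal Filebeat sketch (assumes Filebeat 6.x-style config keys).
# Paths and hosts below are illustrative placeholders.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log   # which files to tail (assumed location)

output.elasticsearch:
  hosts: ["localhost:9200"]  # assumed local Elasticsearch endpoint
```

Modules (the topic of the talk) package up input paths, parsing, and dashboards for common systems, and can typically be turned on with a command like `filebeat modules enable system`.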
As methods for transferring, ingesting, and processing raw log data continue to evolve, we Elastic enthusiasts need to know how to apply the different components of the stack. What do you ultimately need to report from your log data? Where does the data come from, and what is your indexing strategy? Chances are, Beats will figure fundamentally into that conversation.
Food and soda will be provided by Elastic.