
Grafana Loki: Scalable and Flexible Logfile Management



Right now there are three popular platforms for building a scalable and flexible on-premise logfile management solution: Splunk, Elastic Stack, and Graylog. Most customers tend to build on top of the Elastic Stack, as its core software components are open source and therefore accessible without licensing costs. Additionally, Elastic provides lots of features to build a good solution.

As the Elastic Stack consists of various components (Elasticsearch, Kibana, a log shipper from the Beats family), it is far from a lightweight solution. Planning, sizing, and operating a full-blown Elastic Stack can be complex, just as with the other solutions mentioned above.

Another frequently asked question is how to integrate these three logfile management solutions into existing tool stacks, since each of them ships its own frontend.

As the Prometheus stack is spreading into a lot of companies, we're facing questions like: "How can I integrate this new logfile management solution into our existing tooling?" and "How can I view my logfile events in Grafana?". Although Grafana provides datasources for Elasticsearch, Splunk, and Graylog, at its core it is metric-driven. So it's not the best tool to display log events.

Or rather: It wasn’t the optimal tool to display log data.

Enter: Loki

As of mid-December 2018 there is a new option: Grafana Labs released an alpha version of Loki, their approach to logfile management, and it integrates seamlessly into Grafana.

Loki is a logfile aggregator that collects log streams, storing each stream together with a set of labels attached to it. Loki works like Prometheus, but for logs: each log stream is indexed by its labels, and each entry within it is tracked via a timestamp.
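To make this stream-plus-labels model concrete, here is a minimal sketch that pushes a single log line to Loki over HTTP. Note that the /loki/api/v1/push endpoint and JSON payload shown here are those of later Loki releases (the alpha spoke a different protocol), and the labels and URL are made up for illustration:

```python
import json
import time
import urllib.request

LOKI_URL = "http://localhost:3100/loki/api/v1/push"  # assumed local Loki

payload = {
    "streams": [
        {
            # Prometheus-style labels identify the log stream ...
            "stream": {"app": "checkout", "env": "prod"},
            # ... and each entry is a (timestamp in ns, raw line) pair.
            "values": [[str(time.time_ns()), "user 42 logged in"]],
        }
    ]
}

req = urllib.request.Request(
    LOKI_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)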

The yet-to-be-released Grafana 6.0 will ship a new feature called "Explore" that enables viewing the collected log streams. A demo can be found here.

Architecture

Let's take a look at the overall architecture:

Right now the only log shipper that is able to send data to Loki is Promtail. Promtail is part of the Loki project and also an alpha release.

Loki stores the log streams on a local disk. The streams are indexed via BoltDB, while the raw data is organized in chunks. Again: like Prometheus.

Differences to other solutions

The project proclaims to be different from other logfile solutions. It

  • does not do full text indexing on logs. By storing compressed, unstructured logs and only indexing metadata, Loki is simpler to operate and cheaper to run.
  • indexes and groups log streams using the same labels you’re already using with Prometheus, enabling you to switch seamlessly between metrics and logs (see the query sketch after this list).
  • is an especially good fit for storing Kubernetes Pod logs. Metadata such as Pod labels is automatically scraped and indexed.
  • has native support in Grafana (already in the nightly builds, will be included in Grafana 6.0).

(source)
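
To illustrate the second point from the list above: once log streams carry Prometheus-style labels, querying logs looks just like selecting a metric. The following sketch fetches the last hour of log lines for a label set via Loki's HTTP API; the /loki/api/v1/query_range endpoint and selector syntax are from later releases rather than the alpha, and all names are invented for illustration:

```python
import json
import time
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "query": '{app="checkout", env="prod"}',      # label selector, as in Prometheus
    "start": str(time.time_ns() - 3600 * 10**9),  # one hour ago, in nanoseconds
    "end": str(time.time_ns()),
    "limit": 100,
})
url = f"http://localhost:3100/loki/api/v1/query_range?{params}"

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

# Each result entry is one stream: its label set plus (timestamp, line) pairs.
for stream in result["data"]["result"]:
    print(stream["stream"])
    for ts, line in stream["values"]:
        print(ts, line)
```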

Assessment

Loki could evolve into a good alternative, providing a lightweight logfile management solution that integrates very easily into existing Prometheus tooling.

A simple Loki setup is done pretty fast and is easy to operate. Time will tell how Loki copes with an increasing amount of data. Yet, the Promtail log shipper has some limitations as of now:

  • No option to drop events.
  • No option to mutate events, e.g. anonymize a session id.
  • No option to parse the entry data, e.g. extract the original timestamp within the log event.

It will be interesting to watch whether other logshipping solutions will add support for Loki. If, for example, Fluentd provided a Loki output plugin, a lot of the limitations above would be mitigated.
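
Until such a plugin exists, the three gaps can be bridged by preprocessing lines in a small custom shipper before they are pushed to Loki. The sketch below demonstrates all three operations (drop, mutate, parse); the log format, session-id pattern, and filter rule are invented for illustration:

```python
import calendar
import re
import time

SESSION_ID = re.compile(r"session=[0-9a-f]+")
TIMESTAMPED = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z) (.*)$")

def preprocess(line):
    """Return (timestamp_ns, line) for a raw log line, or None to drop it."""
    # 1. Drop events: skip noisy health-check chatter entirely.
    if "GET /healthz" in line:
        return None
    # 2. Mutate events: anonymize the session id.
    line = SESSION_ID.sub("session=<redacted>", line)
    # 3. Parse entry data: extract the original timestamp from the log event.
    match = TIMESTAMPED.match(line)
    if match:
        ts = calendar.timegm(time.strptime(match.group(1), "%Y-%m-%dT%H:%M:%SZ"))
        return str(ts * 10**9), match.group(2)
    return str(time.time_ns()), line  # fall back to ingestion time

for raw in [
    "2019-01-15T10:00:00Z GET /healthz 200",
    "2019-01-15T10:00:01Z login ok session=deadbeef user=42",
]:
    print(preprocess(raw))
```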

Further reading

Read all the details at the project’s GitHub page.

