Frequently Asked Questions
- What is your license?

  Logisland is an open source project licensed under the [Apache License version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
- Where can I get it?

  Logisland is published on Maven Central: check out [which extensions](/extensions) you need and just import them in your pom.xml to get Logisland (see the sketch below). We recommend you start your Logisland experience via our [Getting Started guides](/get-started).
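  As a rough sketch, an extension is pulled in like any other Maven dependency. The groupId, artifactId and version below are illustrative assumptions; check the [extensions](/extensions) page and Maven Central for the exact coordinates:

  ```xml
  <!-- Illustrative only: artifactId and version are assumptions, check Maven Central -->
  <dependency>
    <groupId>com.hurence.logisland</groupId>
    <artifactId>logisland-api</artifactId>
    <version>0.9.5</version>
  </dependency>
  ```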
- Is Logisland beta?

  Yes, we consider Logisland to be beta. However, 95% of the features that Logisland apps use are already running in production.
- What is a Logisland extension?

  One of the main goals of Logisland is ease of extensibility, in order to build a vibrant ecosystem. Think of Logisland extensions as
- I already use ELK, why would I need to use Logisland?

  At first glance the two stacks overlap, but the real purpose of the Logisland framework is to abstract away the scalability of log aggregation. If you already have an ELK stack, you’ll likely want to make it scale (without pain) both in volume and in features. Logisland serves this purpose as an EOM (Event Oriented Middleware) based on Kafka & Spark, where you can plug in advanced features with ease. You just have to route your logs from the Logstash (or Flume, or Collectd, …) agents to Kafka topics and launch parsers and processors, as illustrated below.
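  As a toy illustration only (the broker address and topic name are assumptions, and in practice a Logstash/Flume/Collectd agent does the shipping), anything published to the raw Kafka topic can be picked up by Logisland:

  ```bash
  # Illustrative sketch: broker address and topic name are assumptions.
  # In production a Logstash/Flume/Collectd agent would publish to this topic.
  cat /var/log/apache2/access.log | kafka-console-producer.sh \
    --broker-list broker1:9092 \
    --topic logisland_raw
  ```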
- Do I need Hadoop to play with Logisland?

  No. If your goal is simply to aggregate a massive amount of logs in an Elasticsearch cluster and to define complex event processing rules to generate new events, you definitely don’t need a Hadoop cluster: Kafka topics can be used as a high-throughput log buffer for sliding-window event processing. But if you need advanced batch analytics, it’s really easy to dump your logs into a Hadoop cluster to build machine learning models. A Docker container is the easiest way to set up a Logisland job (see the sketch below).
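  A minimal sketch of the Docker route, assuming the hurence/logisland sandbox image referenced in the Getting Started guide (image name, tag, hostname and exposed ports are assumptions):

  ```bash
  # Assumption: the sandbox image is hurence/logisland; check the Getting Started
  # guide for the exact image name, tag and the ports you need to expose.
  docker pull hurence/logisland
  docker run -it --name logisland -h sandbox hurence/logisland bash
  ```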
- How do I make it scale?

  Logisland is made for scalability: it relies on Spark and Kafka, which are both scalable by design, so to scale Logisland you just have to add more Kafka brokers and more Spark slaves (see the sketch below). This is the manual way; we plan to provide auto-scaling in further releases, through either Docker Swarm or Mesos Marathon support.
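  As a rough manual-scaling sketch (topic name, ZooKeeper address and partition count are illustrative assumptions): more partitions on the input topic let more Spark executors consume in parallel, and the executor count itself is then raised in the job’s Spark configuration.

  ```bash
  # Illustrative only: topic name, ZooKeeper address and partition count are assumptions.
  # More partitions on the input topic allow more Spark executors to consume in parallel.
  kafka-topics.sh --alter \
    --zookeeper zookeeper:2181 \
    --topic logisland_raw \
    --partitions 12
  ```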
- What’s the difference between Apache NiFi and Logisland?

  Apache NiFi is a powerful ETL, very well suited to taking incoming data such as log files, processing and enriching it, and sending it out to any datastore. You can do that with Logisland as well, but Logisland is an event oriented framework designed to process huge amounts of events in a Complex Event Processing manner, not the Single Event Processing that NiFi does. Logisland is not an ETL or a DataFlow: its main goal is to extract information from realtime data. You can still use Apache NiFi to process your logs and send them to Kafka in order to have them processed by Logisland.
- Error: realpath not found

  If you don’t have the `realpath` command on your system you may need to install it (on macOS: `brew install coreutils`; on Debian/Ubuntu: `sudo apt-get install coreutils`).
- How to deploy Logisland in a Hadoop cluster?

  When it comes to scale, you’ll need a cluster. Logisland is just a framework that facilitates running Spark jobs over Kafka topics, so if you already have a cluster you just have to get the latest Logisland binaries and unzip them onto an edge node of your Hadoop cluster. To build the binaries from source:

      git clone git@github.com:Hurence/{project-name}.git
      cd {project-name}-0.9.5
      mvn clean install -DskipTests
      # output file is logisland-assembly/target/logisland-*-bin.tar.gz
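  Once the archive is on the edge node, a job is typically launched with the logisland.sh launcher. A minimal sketch, assuming the 0.9.5 archive built above and one of the sample YAML job files shipped in conf/ (paths and the job file name are illustrative):

  ```bash
  # Sketch only: archive name, install path and job file are assumptions taken from the
  # build step above and the sample configurations; adapt them to your own job.
  tar -xzf logisland-0.9.5-bin.tar.gz
  cd logisland-0.9.5
  bin/logisland.sh --conf conf/index-apache-logs.yml
  ```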
- How to handle long running jobs?

  Please read this excellent article on Spark long-running job setup; the sketch below summarizes the usual knobs.
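  As a hedged sketch of the kind of settings such articles typically recommend for long-running Spark-on-YARN streaming jobs (the values and the jar name are illustrative, not Logisland defaults):

  ```bash
  # Illustrative only: generic long-running Spark-on-YARN settings,
  # not Logisland-specific defaults; the jar name is a placeholder.
  spark-submit --master yarn --deploy-mode cluster \
    --conf spark.yarn.maxAppAttempts=4 \
    --conf spark.yarn.am.attemptFailuresValidityInterval=1h \
    --conf spark.task.maxFailures=8 \
    your-streaming-job.jar
  ```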