Logsearch

This fork adds the X-Pack plugin to the original cloudfoundry-community/logsearch-boshrelease.

About the default templates:

  1. To have X-Pack installed in Kibana, you need to add - x-pack: /var/vcap/packages/x-pack/x-pack-5.5.2.zip to the properties/kibana/plugins list in logsearch-for-cloudfoundry/templates/stub.xxxxx.yml.
  2. The Monitoring, Security, Machine Learning, and Graph features are all DISABLED and NOT tested, because only Watcher with email notifications is needed here.
  3. kibana.health.disable_post_start is set to true in the default template, so even after a successful bosh deploy you may need to wait several minutes while Kibana finishes its optimization.
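As a sketch, the stub fragment described in item 1 above might look like this (only the plugins entry is taken from the note; the surrounding structure of your stub may differ):

```yaml
# Hypothetical excerpt of logsearch-for-cloudfoundry/templates/stub.xxxxx.yml
properties:
  kibana:
    plugins:
      - x-pack: /var/vcap/packages/x-pack/x-pack-5.5.2.zip
```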

To build the release:

  1. ./script/prepare
  2. bosh create release --force --with-tarball
  3. bosh upload release

--

A scalable stack of Elasticsearch, Logstash, and Kibana for your own BOSH-managed infrastructure.

BREAKING CHANGES

Logsearch < v23.0.0 was based on Elasticsearch 1.x and Kibana 3.

Logsearch > v200 is based on Elasticsearch 2.x and Kibana 4.

There is NO upgrade path from Elasticsearch 1.x to 2.x. Sorry :(

Logsearch > v204.0.0 is based on Elasticsearch 5.x and Kibana 5.

For the upgrade procedure from Elasticsearch 2.x, please refer to the v205.0.0 release notes.

Getting Started

This repo contains Logsearch Core, which deploys an ELK cluster that can receive and parse JSON-formatted logs sent via syslog.

Most users will want to combine Logsearch Core with a Logsearch Addon to customise their cluster for a particular type of logs; it is likely you want to follow an Addon's installation guide.

If you are sure you want to install just Logsearch Core, read on...

Installing Logsearch Core

  1. Upload the latest logsearch release

    • Download the latest logsearch release

      NOTE: At the moment, you can get a working logsearch release by cloning the Git repository and creating a BOSH release from it.

      Example:

      $ git clone https://github.com/cloudfoundry-community/logsearch-boshrelease.git
      $ cd logsearch-boshrelease
      $ bosh create release
    • Upload the BOSH release

      Example:

      $ bosh upload release
  2. Customise your deployment stub:

    • Copy templates/stub.$INFRASTRUCTURE.example.yml to stub-logsearch.yml

      Example:

      $ cp logsearch-boshrelease/templates/stub.openstack.example.yml stub-logsearch.yml
    • Edit stub-logsearch.yml to match your IaaS settings

  3. Generate a manifest with scripts/generate_deployment_manifest $INFRASTRUCTURE stub-logsearch.yml > logsearch.yml

    Example:

    $ logsearch-boshrelease/scripts/generate_deployment_manifest openstack stub-logsearch.yml > logsearch.yml

    Notice that logsearch.yml has been generated.

  4. Make sure you have these two security groups configured:

    • bosh, which allows access from this group itself

    • logsearch, which allows access to ports 80, 8080, and 8888

  5. Deploy!

    $ bosh -d logsearch.yml deploy
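For orientation, a deployment stub pins the values specific to your environment. A hypothetical fragment is sketched below; the key names here are illustrative assumptions only, so check templates/stub.$INFRASTRUCTURE.example.yml for the actual schema your IaaS requires:

```yaml
# Hypothetical stub-logsearch.yml fragment -- key names are illustrative,
# not the real schema; copy the example stub and edit its actual keys.
director_uuid: REPLACE-WITH-OUTPUT-OF-bosh-status
meta:
  environment: logsearch-dev
networks:
  - name: default
    subnets:
      - range: 10.0.1.0/24
```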

Common customisations:

  1. Adding new parsing rules:

     logstash_parser:
       filters: |
         # Put your additional Logstash filter config here, eg:
         json {
           source => "@message"
           remove_field => ["@message"]
         }
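To illustrate what the json filter above does to an event, here is a minimal Python sketch (the function name and dict-based event are ours for illustration; this is not Logstash code): it parses the JSON held in the @message field, merges the resulting keys into the event, and removes the source field.

```python
import json

def apply_json_filter(event, source="@message", remove_source=True):
    """Sketch of the Logstash json filter: parse the JSON string in
    `source` and merge its keys into the event."""
    try:
        parsed = json.loads(event[source])
    except (KeyError, ValueError):
        # Logstash would tag the event instead; we just leave it unchanged.
        return event
    merged = dict(event)
    merged.update(parsed)
    if remove_source:
        merged.pop(source, None)
    return merged

event = {"@timestamp": "2017-09-01T00:00:00Z",
         "@message": '{"level": "error", "msg": "disk full"}'}
print(apply_json_filter(event))
```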

Release Channels

  • The latest stable, final release will soon be available on bosh.io
  • develop - The develop branch in this repo is deployed to our test environments. It is occasionally broken - use with care!

Known issues

VMs lose connectivity to each other after VM recreation (eg. instance type upgrade)

While this issue is not specific to this boshrelease, it is worth noting.

On certain IaaSes (AWS confirmed), the bosh-agent fails to flush the ARP cache of the VMs in the deployment, which, in rare cases, results in VMs not being able to communicate with each other after some of them have been recreated. The symptoms vary depending on the affected VMs: anything from HAProxy reporting that it cannot find any backends (eg. Kibana) to the parsers failing to connect to the queue.

To prevent stale ARP entries, set the director.flush_arp property of your BOSH deployment to true.
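A sketch of where that property lives (it belongs in the manifest that deploys the BOSH Director itself, not in the logsearch manifest; the surrounding structure of your Director manifest may differ):

```yaml
# In the BOSH Director's own deployment manifest
properties:
  director:
    flush_arp: true
```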

If the issue occurs, it should fix itself as the kernel updates incomplete ARP entries, which should happen within minutes.

If an immediate fix is preferred, you can also clear the stale entry manually on the VMs that are trying to talk to the recreated VM:

arp -d $recreated_vm_ip

License

Apache License 2.0
