Installing and Configuring a Micro Elasticsearch Cluster in the Cloud

This is part 2 of a series detailing some of the ways you can set up your own Elastic Stack in the cloud for your own personal use.

  1. Building Your Environment in AWS
  2. Setting up and Installing Elasticsearch
  3. Setting up Kibana
  4. Using a Proxy for Kibana with HAProxy
  5. Enabling Security and Using Password Authentication
  6. Making Kibana Internet Accessible with Cloudfront
  7. Securing Cloudfront with Security Groups
  8. Inserting Data into Elasticsearch with Logstash

Step 1: Setup Instance and OS

I am going to be using two t3a.micro instances running CentOS 7 to keep them nice and cheap. Each has the default 8GB root drive plus an extra 10GB EBS volume for the Elasticsearch storage. I prefer to keep my data storage separate from the OS; it makes management, backups and deletion easier.

First, make sure your OS is up to date. Run an update if you haven’t already.

sudo yum update

I will be using Vim throughout these guides, so install it if you fancy.

sudo yum install vim

You will need to attach and prepare the extra storage first, as it will not be usable by default.

See here about adding the extra disks.
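In short, the process looks roughly like this. The device name /dev/xvdf is an assumption — check lsblk on your own instance, as NVMe-based instance types like the t3a usually expose names such as /dev/nvme1n1:

```shell
# List block devices to find the new volume (device name below is an assumption)
lsblk

# Create a filesystem on the empty volume
sudo mkfs -t xfs /dev/xvdf

# Create a mount point and mount the volume
sudo mkdir /data
sudo mount /dev/xvdf /data

# Persist the mount across reboots (nofail so boot continues if the volume is missing)
echo '/dev/xvdf /data xfs defaults,nofail 0 2' | sudo tee -a /etc/fstab
```

This is a sketch of the usual EBS workflow rather than the exact steps from the linked guide; adjust the device name and filesystem to your setup.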

Step 2: Install Elasticsearch

Check the official documentation for information about what we are doing here.

I am installing my cluster on two nodes, but you can do the same on one, three, twenty, and so on. Repeat these steps on each node you are installing Elasticsearch on.

Start by running a few commands to set up the repository. First, install the signing key:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a repo file with the below command:

sudo vim /etc/yum.repos.d/elasticsearch.repo

And paste in the following (the exact contents may change in later versions):

[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

Now save the file, and run the install command:

sudo yum install --enablerepo=elasticsearch elasticsearch

Repeat this for each node you are installing Elasticsearch on. Once you have finished installing everything, we need to make a few config changes.

Step 3: Edit the Elasticsearch Config

Be sure to repeat this for each node in your cluster.

If you have mounted an extra drive to your server, start by adding a folder for Elasticsearch to store its data in.

sudo mkdir /data/elasticsearch

sudo chown elasticsearch:elasticsearch /data/elasticsearch

elasticsearch.yml Config

Now edit the elasticsearch.yml config file:

sudo vim /etc/elasticsearch/elasticsearch.yml

I use the below config for my own cluster:

# elasticsearch.yml config
cluster.name: finance-cluster
node.name: els-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.34.85.15
http.port: 9200
discovery.seed_hosts: ["172.34.85.15", "172.34.90.13"]
cluster.initial_master_nodes: ["els-1", "els-2"]

  1. Give your cluster a name.
  2. Give each node a name. This must be unique per node.
  3. Set path.data to the /data/elasticsearch directory.
  4. Set path.logs elsewhere if you have a preferred logging location.
  5. Set the correct IP address for this node.
  6. Set the port. (It is 9200 by default, but I like to be explicit.)
  7. Set the IPs of all the nodes in the cluster.
  8. Set the nodes that are allowed to be elected master.

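For reference, the config on the second node is identical apart from the node name and bind address — something like the following, using the same example IPs as above:

```yaml
# elasticsearch.yml on the second node (els-2)
cluster.name: finance-cluster
node.name: els-2
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.34.90.13
http.port: 9200
discovery.seed_hosts: ["172.34.85.15", "172.34.90.13"]
cluster.initial_master_nodes: ["els-1", "els-2"]
```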
jvm.options for Elasticsearch

We need to update the JVM options in Elasticsearch if we are running something as small as a t3a.micro. Elasticsearch normally will not start on a server with less than 2GB of RAM, because its default 1GB heap leaves too little memory for the OS and for Elasticsearch’s off-heap usage. So we need to change this:

sudo vim /etc/elasticsearch/jvm.options

Near the top, change:

-Xms1g
-Xmx1g

To:

-Xms512m
-Xmx512m

This sets the heap to half the memory on the t3a.micro instance (which has 1GB of RAM), and allows Elasticsearch to start.

Step 4: Add Swap to the Server

I know: the official Elasticsearch documentation states you should disable swap, and the official AWS documentation recommends against using swap on EC2 instances. We are going to do it anyway.

The simple reason is that Elasticsearch is likely to crash trying to run in 512MB of memory. Adding swap prevents those crashes without the node actually using much more than 512MB; it buffers against spikes during queries and indexing.

I have been running my personal cluster with a 512MB heap and swap, and it has no noticeable performance issues since I am the only user.

See here for information on adding swap.
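In short, adding a 1GB swap file looks something like this (the size and path are my suggestions for a t3a.micro, not requirements):

```shell
# Create a 1GB swap file (1024 blocks of 1MB)
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024

# Restrict permissions, format it as swap, and enable it
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Persist across reboots
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab

# Confirm swap is active
swapon --show
```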

Alternatively, skip this step and add more RAM, or try your luck. If you do have success, let me know what you did in the comments, or via the Contact page.

Step 5: Start Your Brand New Elasticsearch Cluster

Now that you are done installing and configuring each node, it’s time to set them to start on boot. On each server run:

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service

And finally start it with:

sudo systemctl start elasticsearch.service

It will probably take quite a while to start up. You can check the current startup logs of your Elasticsearch nodes via:

sudo journalctl -xef --unit elasticsearch

And you can check the cluster log via:

less +F /var/log/elasticsearch/finance-cluster.log

Once it starts, you can check the status of the cluster via:

curl -X GET "<ipaddress>:9200/_cluster/health?timeout=50s&pretty"

It should return something like this:

{
  "cluster_name" : "finance-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
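You can also confirm that both nodes have joined the cluster with the _cat nodes API (replace <ipaddress> with one of your node IPs, as above):

```shell
curl -X GET "<ipaddress>:9200/_cat/nodes?v"
```

This lists each node’s name, IP, roles and which one is the elected master.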

And there you have it: your own brand new personal Elasticsearch cluster. Check out Part 3: Setting up a Small Kibana Dashboard, where we set up Kibana so you can see your cluster in action.


Any thoughts, concerns, mistakes? Let me know in the comments or via the Contact page.
