Last week, one of the server admins contacted me because the disk of our Elasticsearch instance was filling up quite fast. He asked me if this was normal. :-) Short answer: no. Let's investigate the issue together...
I started by opening Kibana and navigating to Stack Management through the menu on the left.
On the Stack Management page, I clicked on Index Management.
Here I can see the list of all indices. I sorted on Storage size and noticed that Metricbeat was causing the issue. That is strange, as I know for sure that I had configured an index lifecycle policy that would clean up old data.
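By the way, you can get the same overview without the UI. The _cat/indices API lists all indices and can sort them by storage size, which is a quick way to spot the biggest offenders:

GET _cat/indices?v&s=store.size:desc&h=index,health,docs.count,store.size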
But then I noticed that the health of one of the indices is ‘yellow’. Let’s see what’s causing this by executing the following request:
GET /.ds-metricbeat-8.2.0-2022.12.03-000011/_ilm/explain?human
This returns the following result:
Notice line 24.
The problem was that the index was configured to have at least one replica. Elasticsearch never allocates a replica shard on the same node as its primary, so on our single-node development cluster the replica stays unassigned and the index will never turn green.
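If you want Elasticsearch to spell this out, the cluster allocation explain API can tell you exactly why the replica is unassigned. A minimal sketch (shard 0 here is just the first shard of the index):

GET _cluster/allocation/explain
{
  "index": ".ds-metricbeat-8.2.0-2022.12.03-000011",
  "shard": 0,
  "primary": false
}

On a single-node cluster, the response typically points at the same_shard allocation decider, which refuses to place a replica on the node that already holds the primary.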
And because the index lifecycle policy waits for the index to be healthy before moving on, no data was ever purged from the system.
To fix it, I updated the index settings to use no replicas (without an index name, this applies to all indices):
PUT /_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
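After this, the indices turn green and the lifecycle policy can finally purge the old data. A quick check to verify:

GET _cat/indices/.ds-metricbeat-*?v&h=index,health,store.size

One caveat: PUT /_settings only updates existing indices. New backing indices created by the next rollover will again default to one replica unless the index template says otherwise, so you may want to set number_of_replicas to 0 there as well.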