Courier Fetch: 5 of 85 Shards Failed





Amazon Elasticsearch Service (Amazon ES) is a managed service for running Elasticsearch in AWS. The underlying instances are managed by AWS, and you interact with the service through its API and the AWS console.

Kibana is also integrated with Amazon Elasticsearch Service. We came across an issue that caused Kibana 4 to show the following error message when searching for *:

Error: Courier Fetch: 5 of 85 shards failed

The error is not very descriptive.

Because Amazon Elasticsearch Service exposes only an endpoint, we do not have direct access to the underlying instances; only a limited set of API operations is available to us.

We decided to see what could be found from the Chrome browser.

The Chrome Developer Tools (DevTools) provide many useful debugging features.

DevTools can be started using several methods.

1. Right-click on the page and select Inspect.
2. From the menu: More Tools -> Developer Tools.
3. Press F12.

The Network tab in DevTools can be used to debug a wide variety of issues. It records every request made while a web page loads, capturing a wide range of information about each request, such as the HTTP method, the response status, and the time taken to complete the request.

By clicking on any of the requested resources, we can get more information about that request.

In this case, the interesting bit was under the Preview tab. The Preview tab captures the data Chrome got back from the search and stores it as objects.

A successful query would look like the image below, captured from Kibana 3 on the public website logstash.openstack.org.

We checked the '_msearch?timeout=3000' request and received the following error messages under the nested values (for example, responses -> 0 -> _shards -> failures -> 0):

{ index: "logstash-2016.02.24", shard: 1, status: 500, … }
    index: "logstash-2016.02.24"
    reason: "RemoteTransportException[[Leech][inet[/10.212.25.251:9300]][indices:data/read/search[phase/query]]]; nested: ElasticsearchException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: UncheckedExecutionException[org.elasticsearch.common.breaker.CircuitBreakingException: [FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]]; nested: CircuitBreakingException[[FIELDDATA] Data too large, data for [@timestamp] would be larger than limit of [5143501209/4.7gb]];"
    shard: 1
    status: 500

So the issue is clear: fielddata usage is above the circuit breaker limit.

As per the Amazon documentation:

Field Data Breaker (indices.breaker.fielddata.limit) – Percentage of JVM heap memory allowed to load a single data field into memory. The default value is 60%. We recommend raising this limit if you are uploading data with large fields. For more information, see Field data in the Elasticsearch documentation.
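
To see which fields are actually holding fielddata on each node, the _cat/fielddata API can be queried. A minimal check, assuming the same hypothetical domain endpoint elasticsearch.abc.com used throughout this post:

$ curl -XGET 'https://elasticsearch.abc.com/_cat/fielddata?v&fields=@timestamp'

This shows the per-node memory consumed by fielddata for @timestamp, the field named in the circuit breaker exception above.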

The Amazon Elasticsearch Service developer guide documents the Elasticsearch operations supported on a domain.


On checking the current heap usage (the heap.percent column) of the nodes, we can see that heap usage on the data nodes is very high:

$ curl -XGET 'https://elasticsearch.abc.com/_cat/nodes?v'
host ip heap.percent ram.percent load node.role master name
x.x.x.x 10 85 0.00 - m Drax the Destroyer
x.x.x.x 7 85 0.00 - * H.E.R.B.I.E.
x.x.x.x 78 64 1.08 d - Black Cat
x.x.x.x 80 62 1.41 d - Leech
x.x.x.x 7 85 0.00 - m Alex
x.x.x.x 78 63 0.27 d - Saint Anna
x.x.x.x 80 63 0.28 d - Martinex
x.x.x.x 78 63 0.59 d - Scorpio

The following command can be used to increase the indices.breaker.fielddata.limit value as a temporary workaround:

$ curl -XPUT 'https://elasticsearch.abc.com/_cluster/settings' -d '{ "persistent" : { "indices.breaker.fielddata.limit" : "89%" } }'
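
The new limit can be verified by reading the cluster settings back, again against the hypothetical endpoint used in this post:

$ curl -XGET 'https://elasticsearch.abc.com/_cluster/settings?pretty'

The persistent section of the response should now show indices.breaker.fielddata.limit set to 89%.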

Running the command allowed the Kibana search to run without issues and show the data.

The real solution would be to increase the number of data nodes, or to reduce the amount of fielddata that needs to be loaded by limiting the number of indices.

AWS Lambda can be used to run a script that cleans up old indices as a scheduled event.
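
At its core, such a cleanup script only needs to list the time-based indices and delete the ones that fall outside the retention window. A sketch of the underlying API calls, assuming the hypothetical endpoint above and the logstash-YYYY.MM.dd naming scheme seen in the error message:

# List the logstash indices with their sizes
$ curl -XGET 'https://elasticsearch.abc.com/_cat/indices/logstash-*?v'

# Delete indices older than the retention window, e.g. all of January 2016
$ curl -XDELETE 'https://elasticsearch.abc.com/logstash-2016.01.*'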


Prerequisites (for a local ELK 5.0.0 installation on Windows):

Java SE installation: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

Download homepage for the ELK products:

https://www.elastic.co/downloads

Elasticsearch: https://www.elastic.co/downloads/elasticsearch

Logstash: https://www.elastic.co/downloads/logstash

Kibana: https://www.elastic.co/downloads/kibana


Versions: Elasticsearch 5.0.0, Logstash 5.0.0, Kibana 5.0.0

Download the latest version of each.

Installation of Elasticsearch:

Extract the zip file

Go to C:\Users\Administrator\Downloads\elasticsearch-5.0.0\elasticsearch-5.0.0\bin

And execute the elasticsearch.bat script, as shown below.
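
From a Windows command prompt, assuming the download location above, the two steps look like:

cd C:\Users\Administrator\Downloads\elasticsearch-5.0.0\elasticsearch-5.0.0\bin
elasticsearch.bat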


Elasticsearch 5.0.0 is now running at http://localhost:9200/

You can access it using a shell command (Babun, Cygwin, Linux, …).

I am using the Cygwin shell.

On Cygwin, try:

$ curl.exe http://localhost:9200/
{
  "name" : "CVY5VVn",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "E110diEFTheAeRwyvZkBkQ",
  "version" : {
    "number" : "5.0.0",
    "build_hash" : "253032b",
    "build_date" : "2016-10-26T04:37:51.531Z",
    "build_snapshot" : false,
    "lucene_version" : "6.2.0"
  },
  "tagline" : "You Know, for Search"
}

User guide: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html

Installation of Kibana:

Extract the zip file

Go to C:\Users\Administrator\Downloads\kibana-5.0.0-windows-x86\kibana-5.0.0-windows-x86\bin

Launch the kibana.bat script, as shown below.
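
As with Elasticsearch, from a Windows command prompt (assuming the download location above):

cd C:\Users\Administrator\Downloads\kibana-5.0.0-windows-x86\kibana-5.0.0-windows-x86\bin
kibana.bat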

From your browser go to http://localhost:5601/

Installation of Logstash:

Extract the zip file

Go to C:\Users\Administrator\Downloads\logstash-5.0.0\logstash-5.0.0\bin

Create a logstash.conf file with the pipeline configuration.
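
The original configuration did not survive in this copy of the post. As a placeholder, a minimal sketch of a Logstash 5.0.0 pipeline that reads events from stdin and sends them to the local Elasticsearch instance might look like this (the stdin input and the output settings are assumptions, not the original setup):

input {
  # Read events typed on the console (assumed input; the original is unknown)
  stdin { }
}

output {
  # Ship events to the Elasticsearch instance started earlier
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

Logstash can then be started from the bin directory with:

logstash.bat -f logstash.conf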




