500 billion documents
1) JBOD over RAID
2) unicast discovery
3) split memory between JVM heap and filesystem cache
4) tune kernel params and user/process/network limits
5) JVM bugs can corrupt data in ES
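Points 2–4 can be sketched as an elasticsearch.yml fragment (host names are placeholders; setting names follow the pre-2.0 Elasticsearch config style these notes appear to describe):

```yaml
# elasticsearch.yml -- unicast (not multicast) discovery; hosts are illustrative
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master-1", "es-master-2", "es-master-3"]

# lock the heap in RAM so the OS never swaps it out
bootstrap.mlockall: true
```

The heap itself is usually set via the ES_HEAP_SIZE environment variable to roughly half of RAM (and below ~32 GB so compressed object pointers stay enabled), leaving the other half to the filesystem cache.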

6) tune JVM params, network/connectivity params, recovery params, gateway params, caching params
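A sketch of what some of those knobs might look like in elasticsearch.yml (values are examples, not recommendations):

```yaml
# recovery: throttle how fast shards are copied between nodes
indices.recovery.max_bytes_per_sec: 100mb
indices.recovery.concurrent_streams: 4

# gateway: after a full-cluster restart, wait for enough nodes before recovering
gateway.recover_after_nodes: 8
gateway.expected_nodes: 10
gateway.recover_after_time: 5m

# caching: cap the fielddata cache so aggregations cannot exhaust the heap
indices.fielddata.cache.size: 30%
```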

7) tribe nodes for clusters of more than 150 nodes
8) refresh_interval = -1
9) bulk index thread pool
10) disable _all
11) explain/validate queries
12) search templates
13) high cardinality fields – disable aggregation/sorting
14) do search on client nodes
15) monitoring – nagios
16) upgrade es during upgrade
17) 600-700 TB.
18) reindexing faster than recovery
19) bulk index while replica is zero
20) what if you kill master
21) field-level statistics
22) merge count
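Items 8 and 19 above are usually combined for large bulk loads; a sketch with curl (the index name `myindex` is hypothetical):

```shell
# before the bulk load: disable refresh and drop replicas
curl -XPUT 'localhost:9200/myindex/_settings' -d '
{"index": {"refresh_interval": "-1", "number_of_replicas": 0}}'

# ... run the bulk load ...

# afterwards: restore settings and refresh once
curl -XPUT 'localhost:9200/myindex/_settings' -d '
{"index": {"refresh_interval": "1s", "number_of_replicas": 1}}'
curl -XPOST 'localhost:9200/myindex/_refresh'
```

Adding the replicas back afterwards copies already-indexed segments instead of re-indexing every document on the replica, which is why indexing with zero replicas is faster.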


1) fielddata and doc values
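Doc values move the per-field data used for sorting and aggregations from heap-resident fielddata onto disk, served through the filesystem cache. A mapping sketch (field and type names are hypothetical; syntax is the pre-5.x `string` type):

```json
{
  "mappings": {
    "logs": {
      "properties": {
        "user_id": {
          "type": "string",
          "index": "not_analyzed",
          "doc_values": true
        }
      }
    }
  }
}
```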

1) Suro (Netflix, equivalent to Herd), Asgard (Netflix OSS)
2) search nodes
3) Apollo discovery plugin open sourced – Raigad does it
4) RAM/2 for ES; use jstat to check the JVM
5) unbounded bulk indexing, unless heap errors
6) increase the file descriptor limit
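Point 6 is typically done in /etc/security/limits.conf; the user name and value below are illustrative:

```shell
# /etc/security/limits.conf
elasticsearch  -  nofile  65536

# verify from a running node (reports max_file_descriptors per node)
curl 'localhost:9200/_nodes/process?pretty'
```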