# nebula
s
This message was deleted.
u
Checking the leader distribution first to make sure it's balanced may help. Also check RocksDB's block cache. Please let me know your data size and the crashed storage host's memory usage.
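As a quick sketch of the block-cache check (the config path assumes a default package install under /usr/local/nebula; adjust it to your deployment):

```
# Show the RocksDB cache-related flags each storaged was started with
# (path assumes the default package install location)
grep -E "rocksdb_block_cache|enable_partitioned_index_filter" /usr/local/nebula/etc/nebula-storaged.conf
```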
r
The space has 100 partitions.
+--------------+------+-----------+----------+--------------+---------------------+------------------------+---------+
| Host         | Port | HTTP port | Status   | Leader count | Leader distribution | Partition distribution | Version |
+--------------+------+-----------+----------+--------------+---------------------+------------------------+---------+
| "10.0.1.8"   | 9779 | 19779     | "ONLINE" | 33           | "MDB:33"            | "MDB:33"               | "3.3.0" |
| "10.0.1.95"  | 9779 | 19779     | "ONLINE" | 33           | "MDB:33"            | "MDB:33"               | "3.3.0" |
| "10.0.1.252" | 9779 | 19779     | "ONLINE" | 34           | "MDB:34"            | "MDB:34"               | "3.3.0" |
+--------------+------+-----------+----------+--------------+---------------------+------------------------+---------+
Vertex and Edge Counts:
| "Space" | "vertices"    | 4360152   |
| "Space" | "edges"       | 195535793 |
Disk usage is around 50 GB on each host. RAM usage shoots over 250 GB.
u
That's weird. What kind of queries were you executing then? A skewed big query is possible, but that would hardly happen since data is hashed now. You can also check the current config from the HTTP interface to see if it's what you expect.
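For example (a sketch only; the host and HTTP port are taken from the SHOW HOSTS output above, and the exact endpoint may differ by version), the flags a running storaged is actually using can be pulled from its HTTP port:

```
# Query the storaged HTTP interface for its current gflags
# and filter for the memory-related ones
curl -s "http://10.0.1.252:19779/flags" | grep -E "rocksdb_block_cache|enable_partitioned_index_filter"
```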
r
I am not executing any queries yet. My concern is with respect to startup. I was expecting that `enable_partitioned_index_filter=true` would reduce the RAM requirement, and it is performing as expected on 2 hosts of the cluster (max RAM usage: 10-12 GB), but the host with leader count 34 tries to use more than 250 GB.
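For reference, an illustrative nebula-storaged.conf snippet with these memory-related flags might look like the following; the values are placeholders, not recommendations:

```
########## rocksdb / memory (illustrative values only) ##########
# Keep RocksDB index and filter blocks in the block cache instead of
# holding them all in memory, which is intended to cap RAM usage
--enable_partitioned_index_filter=true
# RocksDB block cache size in MB
--rocksdb_block_cache=4096
```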