# nebula-users
g
Did you test the compression change on an already populated database, and is there any need to execute a compaction or some other action afterwards to have the data available for query?
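(For reference, a compression change like the one discussed here is typically made via the RocksDB flags in nebula-storaged.conf; the flags below are the standard ones, but the values are purely illustrative, not the poster's actual settings.)

# nebula-storaged.conf (illustrative values)
# Default compression algorithm for all RocksDB levels
--rocksdb_compression=lz4
# Optional per-level override, e.g. leave L0/L1 uncompressed
--rocksdb_compression_per_level=no:no:lz4:lz4:lz4:lz4:lz4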
w
The compression algorithm change shouldn’t impact queries. Have you had a chance to trigger a compaction?
g
I use only automatic compactions; I don’t force them with an explicit compaction command.
I found that the issue I have is related to an index: LOOKUP doesn’t return all records matching the index, but FETCH-ing the data by VertexID does return the record.
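(A minimal sketch of the mismatch being described, assuming a hypothetical player tag with an index on name; the tag, property, and vertex ID are made up for illustration.)

# Index-backed lookup: misses some matching vertices
LOOKUP ON player WHERE player.name == "abcdefgh" YIELD id(vertex) AS vid;
# Direct fetch by vertex ID: still returns the record
FETCH PROP ON player "player_42" YIELD properties(vertex);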
w
Checked with the storage folks, and it looks like auto-compaction timed out due to the compression-algorithm change. Could you trigger a manual compaction?
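(In NebulaGraph a manual compaction can be submitted as a job from the console; the space name below is illustrative.)

USE my_space;
# Submit a full compaction job for the current space
SUBMIT JOB COMPACT;
# Check job progress
SHOW JOBS;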
g
I can, but how will that help with the index problem, i.e. LOOKUPs not finding all the data in the index?
I’ve noticed that executing a query with a WHERE clause using ‘==’ or STARTS WITH ‘abcdefgh’ didn’t return all matching rows. I then experimented with STARTS WITH ‘abcdef’ and got a larger number of rows, and after that, repeating the query with STARTS WITH ‘abcdefgh’ returned all rows matching the WHERE clause. It looks like some kind of “repair on read”, but I don’t know what actually happened. I can also see activity on storage producing increased reads/writes, like some background process, which started after I reverted the change to the leveled-compression parameter and restarted the storaged service on all nodes of the cluster.
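(If the index itself is stale rather than the underlying data, rebuilding it is the usual way to bring it back in sync; the index name below is hypothetical.)

# Rebuild the suspect index and watch its status
REBUILD TAG INDEX player_name_index;
SHOW TAG INDEX STATUS;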
w
Is it 3.3.0 or another version? What is the storage client timeout configuration in your cluster?
g
Yes, it is 3.3.0.
# storage client timeout
--storage_client_timeout_ms=60000
w
Dear @Goran Cvijanovic, let’s continue the thread in a DM here so we can involve others later. Timing out at 60 seconds is quite abnormal; may I know if it’s HDD or SSD?
Was the log level v4 by default, or did you just switch it now?