
BIG-IQ disk out of space


Cowboy Denny

Here are a few things I've done in the past that helped me get through a BIG-IQ out-of-space condition.

K18484011: BIG-IQ Reclaim unused disk space from postgres database

Do a full vacuum and reindex to reclaim the space.
Note: This causes high load and locks individual tables while it works on them, so it is recommended to perform this during a maintenance window.

Do a full vacuum:
psql -U postgres -d bigiq_db -c "VACUUM FULL;"

After the vacuum finishes, reindex:
psql -U postgres -d bigiq_db -c "REINDEX DATABASE bigiq_db;"
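
Since the note above warns about load and table locks, here is a minimal sketch of how the two commands could be gated on actual disk pressure so they only run when needed. The `/var` mount point and the 80% threshold are my assumptions, not anything from the F5 article; adjust both for your environment.

```shell
#!/bin/sh
# Sketch only: run the VACUUM FULL + REINDEX above only when disk usage is high.
# Assumptions: BIG-IQ's postgres data lives under /var, and 80% used is the
# point where reclaiming space is worth the table locks.

used_pct() {
  # Print the Use% column of `df -P` for the given path, without the % sign
  df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print $5 }'
}

if [ "$(used_pct /var)" -ge 80 ]; then
  echo "Low space on /var: schedule VACUUM FULL + REINDEX in a maintenance window"
  # psql -U postgres -d bigiq_db -c "VACUUM FULL;"
  # psql -U postgres -d bigiq_db -c "REINDEX DATABASE bigiq_db;"
fi
```

The psql lines are left commented out so the check can be tried safely first; uncomment them once you are inside a maintenance window.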

 

When running a pre-check on the BIG-IQ CM, I get this error:

* Checking large task record accumulations
***************************************************************************************
The following tables have over 100k records:
 bigiqtasks.global_tasks_log_pool_mgmt | Number of records: 123533


Please reduce # of records to below 100000 in each table before upgrading.
To remove by date, use the following example:
psql -U postgres -d bigiq_db -c "DELETE FROM <schema_name.table_name> WHERE _value->>'endDateTime' < <dateTime ie. '2021-01-25T16:41:04.740-0800'> AND _value->>'status' IN ('FINISHED', 'FAILED');"
This will remove all tasks that have an endDateTime before January 25th 2021 at 16:41 PST whose status is FINISHED or FAILED.
 

It appears easy enough, but the dang command doesn't work.
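
One common trip-up with that template (I can't confirm it's the issue on every BIG-IQ version) is leaving the angle-bracket placeholders in place; they have to be replaced wholesale, brackets included, with quoted literals. Filling them in with the table name and the example timestamp from the pre-check output above, a sketch:

```shell
#!/bin/sh
# Placeholders replaced with the table and cutoff from the pre-check output.
# Verify both values against your own pre-check results before deleting anything.
TABLE="bigiqtasks.global_tasks_log_pool_mgmt"
SQL="DELETE FROM $TABLE WHERE _value->>'endDateTime' < '2021-01-25T16:41:04.740-0800' AND _value->>'status' IN ('FINISHED', 'FAILED');"

# Dry run first (assumption: counting with the same WHERE clause is a sensible
# safety check before destroying records):
# psql -U postgres -d bigiq_db -c "SELECT count(*) FROM $TABLE WHERE _value->>'endDateTime' < '2021-01-25T16:41:04.740-0800' AND _value->>'status' IN ('FINISHED', 'FAILED');"

# The actual delete, once the count looks right:
# psql -U postgres -d bigiq_db -c "$SQL"
```

If the count plus the remaining rows still exceeds 100k, move the cutoff timestamp forward and re-run the dry run before deleting.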
