PostgreSQL: don't rely on autovacuum

I have a pg database that just suffered through a multiple-hour vacuum of one of its tables. It was pretty painful. Autovacuum is configured to run every 12 hours, but for whatever reason it didn't see fit to vacuum this very busy table for substantially longer than that. (Not sure exactly how long. Over a week is my guess.)

The problem is that one size does not fit all. On smaller tables, the default autovacuum_vacuum_scale_factor of 0.4 is just fine. On this table waiting that long is unacceptable.
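For context, autovacuum decides to vacuum a table roughly when its dead rows exceed a base threshold plus the scale factor times the table's row count. A sketch of the arithmetic (assuming the 8.x default base threshold of 1000; 'bigtable' is a stand-in name):

```sql
-- Approximate trigger point: base_threshold + scale_factor * reltuples.
-- With scale_factor = 0.4, a 100-million-row table accumulates roughly
-- 40 million dead rows before autovacuum fires.
SELECT relname,
       reltuples,
       1000 + 0.4 * reltuples AS approx_vacuum_threshold
FROM pg_class
WHERE relname = 'bigtable';
```

That per-table arithmetic is exactly why a single scale factor can't serve both small and very large tables.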

Comments

Jonathan Ellis said…
"I gather that you can do _some_ database operations while vacuuming in Postgres 8.x?"

Yes, you only need an exclusive lock for a full vacuum, so avoid those unless absolutely necessary. But that's been the case since they split vacuum into "normal" and "full." (7.2?)
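The locking difference in a nutshell (a sketch; 'bigtable' is a stand-in name):

```sql
-- Plain VACUUM runs alongside normal reads and writes:
VACUUM bigtable;

-- VACUUM FULL rewrites the table and holds an exclusive lock for the
-- duration, blocking all access -- avoid it on busy tables:
VACUUM FULL bigtable;
```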

I do seem to remember seeing vacuum stuff in the 8.x changelogs, but I don't remember details. 7.3 is hella ancient at this point... There are lots of good reasons to upgrade besides vacuum.
Jonathan Ellis said…
"It's just that soft vacuum didn't shrink the database size _at all_."

No, it just marks the space as ready for re-use. So you should see it hold steady at "max space ever used by this table" after a couple days.
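One way to see this in action (a sketch; 'bigtable' is a stand-in name):

```sql
-- VACUUM VERBOSE reports removed row versions and pages with free space.
-- The file on disk does not shrink, but subsequent inserts and updates
-- reuse the reclaimed space instead of growing the table.
VACUUM VERBOSE bigtable;
```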
Jonathan Ellis said…
I remember something like this with, I think, 7.2. I'd suggest upgrading, I haven't seen this in a long time.
Unknown said…
I know this post is a year old, so you've probably already discovered this, but you can fine-tune autovac parameters for particular tables by putting entries in the pg_autovacuum table. For example

INSERT INTO pg_autovacuum (vacrelid, enabled, vac_base_thresh, vac_scale_factor, anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit) VALUES (
(SELECT oid FROM pg_class WHERE relname = 'bigtable'), 't'::boolean, 1000, 0.0, 0, 0.01, -1, -1);
Which means: vacuum bigtable after 1000 rows have been obsoleted, and analyze it after 1% of its rows have changed (the -1 values fall back to the system-wide cost settings).
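For readers on newer versions: PostgreSQL 8.4 removed the pg_autovacuum catalog, and the same per-table overrides are now set as storage parameters. A sketch using the same hypothetical table name:

```sql
-- Per-table autovacuum tuning on PostgreSQL 8.4 and later:
ALTER TABLE bigtable SET (
    autovacuum_vacuum_threshold = 1000,
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_analyze_scale_factor = 0.01
);
```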

For details, please Read The Fine Manual. Actually, I'm continually surprised that people have problems with this since it's pretty well documented. Perhaps we should err in the direction of more aggressive autovacuum daemon settings by default.

jarno, when you did the pg_dump from 7.3 to load into 8.0 did you follow the explicitly documented requirement to use the 8.0 pg_dump binary?
Jonathan Ellis said…
Yeah, I did discover pg_autovacuum. The UI could use some work there. :)

Thanks for the comment.
