
PostgreSQL: don't rely on autovacuum

I have a pg database that just suffered through a multiple-hour vacuum of one of its tables. It was pretty painful. Autovacuum is configured to run every 12 hours, but for whatever reason it didn't see fit to vacuum this very busy table for substantially longer than that. (Not sure exactly how long. Over a week is my guess.)

The problem is that one size does not fit all. On smaller tables, the default autovacuum_vacuum_scale_factor of 0.4 is just fine. On a table this busy, waiting for 40% of the rows to be obsoleted before a vacuum kicks in is unacceptable.
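One workaround in the meantime is to schedule an explicit vacuum of the busy table yourself rather than waiting on autovacuum. A minimal sketch, with "bigtable" standing in for whatever table is hurting:

-- Run this on a schedule (e.g. hourly from cron, via psql or vacuumdb)
-- instead of waiting for autovacuum to get around to it.
VACUUM ANALYZE bigtable;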

Comments

Jonathan Ellis said…
"I gather that you can do _some_ database operations while vacuuming in Postgres 8.x?"

Yes, you only need an exclusive lock for a full vacuum, so avoid those unless absolutely necessary. But that's been the case since they split vacuum into "normal" and "full." (7.2?)
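To spell out the difference ("bigtable" is just a placeholder name):

-- Plain vacuum: marks dead rows as reusable; reads and writes keep running.
VACUUM bigtable;
-- Full vacuum: compacts the table and gives space back to the OS,
-- but holds an exclusive lock for the whole run.
VACUUM FULL bigtable;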

I do seem to remember seeing vacuum stuff in the 8.x changelogs, but I don't remember the details. 7.3 is hella ancient at this point... There are lots of good reasons to upgrade besides vacuum.
Jonathan Ellis said…
"It's just that soft vacuum didn't shrink the database size _at all_."

No, it just marks the space as ready for re-use. So you should see it hold steady at "max space ever used by this table" after a couple days.
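One way to watch that, assuming a version with the size functions (8.1 or later) and a placeholder table name:

-- On-disk size of the table; with regular vacuuming this should plateau
-- at its high-water mark instead of growing without bound.
SELECT pg_size_pretty(pg_relation_size('bigtable'));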
Jonathan Ellis said…
I remember something like this with, I think, 7.2. I'd suggest upgrading, I haven't seen this in a long time.
Unknown said…
I know this post is a year old, so you've probably already discovered this, but you can fine-tune autovac parameters for particular tables by putting entries in the pg_autovacuum table. For example

INSERT INTO pg_autovacuum (vacrelid, enabled, vac_base_thresh, vac_scale_factor, anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES ((SELECT oid FROM pg_class WHERE relname = 'bigtable'), 't', 1000, 0.0, 0, 0.01, -1, -1);
This means: vacuum bigtable after 1000 rows have been obsoleted, and analyze it after 1% of its rows have changed (the two -1s fall back to the server-wide cost settings).
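To sanity-check what's in there afterwards, something like this should work against that catalog (assuming the 8.1-era column names):

SELECT c.relname, av.enabled, av.vac_base_thresh, av.vac_scale_factor,
       av.anl_base_thresh, av.anl_scale_factor
FROM pg_autovacuum av
JOIN pg_class c ON c.oid = av.vacrelid;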

For details, please Read The Fine Manual. Actually, I'm continually surprised that people have problems with this since it's pretty well documented. Perhaps we should err in the direction of being more aggressive with the autovacuum daemon settings by default.

jarno, when you did the pg_dump from 7.3 to load into 8.0 did you follow the explicitly documented requirement to use the 8.0 pg_dump binary?
Jonathan Ellis said…
Yeah, I did discover pg_autovacuum. The UI could use some work there. :)

Thanks for the comment.
