After tuning PostgreSQL, pgbench results are worse
I am testing PostgreSQL on an 8GB RAM / 4 CPU / 80GB SSD cloud server at DigitalOcean. I first ran pgbench with the default settings in postgresql.conf, then altered the common settings (shared_buffers, work_mem, maintenance_work_mem, effective_cache_size) to reflect the 8GB of RAM. After running the second set of tests, I noticed some of the results were worse. Any suggestions on why that might be? I am rather new to pgbench and to tuning PostgreSQL in general.
Settings:
- shared_buffers = 2048MB
- work_mem = 68MB
- maintenance_work_mem = 1024MB
- effective_cache_size = 4096MB
Tests:
- pgbench -i -s 100
- pgbench -c 16 -j 2 -t 60 -U postgres postgres
- pgbench -S -c 16 -j 2 -t 60 -U postgres postgres
- pgbench -c 16 -j 4 -t 60 -U postgres postgres
- pgbench -S -c 16 -j 4 -t 60 -U postgres postgres
- pgbench -c 16 -j 8 -t 60 -U postgres postgres
- pgbench -S -c 16 -j 8 -t 60 -U postgres postgres
How effective are these tests? Is this an effective way to employ pgbench? How should I customize the tests to reflect my data and server instance?
what mean "worse"? how long time run pgbench? test should executed 2hour minimum realistic values. version of postgresql have?
Attention: you should be careful when interpreting pgbench results. You should optimize the execution of your application, not of pgbench. pgbench is good for checking hardware or software, but it is a bad tool for optimizing PostgreSQL configuration.
The configuration variables you mentioned are basic configuration, and you cannot go far wrong there (the server must never actively use swap, and these variables ensure that).
A formula I use, for a dedicated server with 8GB RAM:
- shared_buffers = 1/3 .. 1/4 of dedicated RAM
- effective_cache_size = 2/3 of dedicated RAM
- maintenance_work_mem = larger than your biggest table (if possible), else 1/10 of RAM, else max_connections * 1/4 * work_mem
- work_mem = a sensitive setting, based on slow-query analysis (start with 100MB)

The following must hold:
max_connections * work_mem * 2 + shared_buffers + 1GB (OS) + 1GB (filesystem cache) <= RAM size
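As a quick sanity check of that inequality, here is a small shell sketch. The asker's work_mem = 68MB and shared_buffers = 2048MB are used; max_connections = 100 is an assumed default, not from the question:

```shell
# Check: max_connections * work_mem * 2 + shared_buffers
#        + 1GB (OS) + 1GB (filesystem cache) <= RAM
# All values in MB; max_connections=100 is an assumed default.
ram=8192
max_connections=100
work_mem=68
shared_buffers=2048
budget=$(( max_connections * work_mem * 2 + shared_buffers + 1024 + 1024 ))
echo "budget: ${budget}MB of ${ram}MB"
if [ "$budget" -le "$ram" ]; then
    echo "fits"
else
    echo "over budget: lower work_mem or max_connections"
fi
```

With these assumed values the budget comes out well over 8GB, which illustrates the point: a per-session work_mem that looks modest can still overcommit memory once many connections run sorts concurrently.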
Usually the default values for the WAL buffer size and checkpoint segments are too low as well, and you can increase them.
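A hypothetical postgresql.conf fragment along those lines (the values are illustrative assumptions, not recommendations; note that checkpoint_segments was removed in PostgreSQL 9.5 in favor of max_wal_size):

```
wal_buffers = 16MB          # default is -1 (auto-sized from shared_buffers)
checkpoint_segments = 32    # pre-9.5 only; on 9.5+ use max_wal_size instead
```

Larger WAL buffers and less frequent checkpoints mainly help write-heavy workloads such as pgbench's default mixed test.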