dbfUpsizing questions

Use this forum for questions and answers regarding PostgreSQL and the PGDBE.
User avatar
rdonnay
Site Admin
Posts: 4774
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA
Contact:

dbfUpsizing questions

#1 Post by rdonnay »

I have installed the PostgreSQL 12 server on my Dell OptiPlex 980 running Windows 10 64-bit.

I bought this inexpensive, refurbished computer just to run as a web server.
It is running several websites, including this phpBB forum and works fast with no delays.

Now I am experimenting with upsizing some data and I am miffed at how slow it is.
It has been moving data from a 270 MB file so slowly that I have hardly seen the progress bar move in the past 2 hours.
The CPU is running at 60%, all of it being used by the PostgreSQL server.
The DBFUpsize program is running at 0.1%, so it appears that it is always waiting for the server.

I looked at the minimum requirements for PostgreSQL and it seems that this CPU should be sufficient for much better performance. At this rate, I suspect that upsizing my 96 files (about 6 GB) of data will take days to complete. This, of course, is unacceptable.

Intel Core 650, 3.3 GHz, 2 cores
8 GB physical RAM

What experiences do users of PostgreSQL have with upsizing their data?
The eXpress train is coming - and it has more cars.

User avatar
Auge_Ohr
Posts: 1422
Joined: Wed Feb 24, 2010 3:44 pm

Re: dbfUpsizing questions

#2 Post by Auge_Ohr »

hi,

Did Tom not tell you how slow the "upsize" of the PgDBE is? :liar:
Open the Task Manager and look at the Network tab to see what the transfer speed is.

I use a Celeron N3150 with 8 GB as a PostgreSQL server.
[Attachment: UpSize_small_Speed_20MB.jpg]
Compare it and "see" how slow the PgDBE is.
greetings by OHR
Jimmy

User avatar
Tom
Posts: 1205
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: dbfUpsizing questions

#3 Post by Tom »

It takes around 90 minutes to upsize a database of 500 files totalling around 500 MB on an average machine with PG 12. I recommend building the indexes after upsizing (INDEX ON/OrdCreate()). I have run the upsizing process myself, but there is little control possible over it. Upsize speed decreases with the table size and the number of fields.
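The same principle applies on the server side: per-row index maintenance makes bulk inserts expensive, so loading first and indexing once afterwards is much cheaper. A minimal SQL sketch of the idea, with a hypothetical parts table, file path, and column name:

```sql
-- Bulk-load first, then build the index once over all rows,
-- instead of updating the index for every inserted row.
-- Table, path, and column are illustrative assumptions.
COPY public.parts FROM '/path/to/parts.csv' WITH (FORMAT csv);
CREATE INDEX parts_partno_idx ON public.parts (partno);
```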
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

User avatar
Auge_Ohr
Posts: 1422
Joined: Wed Feb 24, 2010 3:44 pm

Re: dbfUpsizing questions

#4 Post by Auge_Ohr »

hi Roger,

Have you changed the default settings in postgresql.conf? :?:

Try this on your G650 (2 cores, 8 GB RAM):

Code: Select all

max_connections = 20
shared_buffers = 2GB
effective_cache_size = 6GB
maintenance_work_mem = 512MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
work_mem = 104857kB
min_wal_size = 1GB
max_wal_size = 4GB
max_worker_processes = 2
max_parallel_workers_per_gather = 1
max_parallel_workers = 2
max_parallel_maintenance_workers = 1
What those variables are for can be read here:
https://wiki.postgresql.org/wiki/Tuning ... SQL_Server
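As an aside, these settings can also be applied from psql with ALTER SYSTEM instead of editing postgresql.conf by hand (values here are a subset of the list above):

```sql
-- Written to postgresql.auto.conf, which overrides postgresql.conf.
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
-- shared_buffers only takes effect after a full service restart;
-- pg_reload_conf() picks up the reloadable settings.
SELECT pg_reload_conf();
```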
greetings by OHR
Jimmy

User avatar
rdonnay
Site Admin
Posts: 4774
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA
Contact:

Re: dbfUpsizing questions

#5 Post by rdonnay »

I recommend to build the indexes after upsizing (INDEX ON/OrdCreate()).
I had never thought of that.

It failed after 12 hours when it got to a database that had some kind of error.
There should have been an option to continue the process and just flag the bad database.
Instead, I now have to kill the database and start all over.

It sounds like the upsizing part of the process can be the most troublesome.
It would be handy if DBFUPSIZE had more options.
I noticed that it is written in Xbase++.
I am going to ask Alaska if they will provide the source code so I can customize it.

I am going to stop the upsizing and restart it with no indexes.
The eXpress train is coming - and it has more cars.

User avatar
rdonnay
Site Admin
Posts: 4774
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA
Contact:

Re: dbfUpsizing questions

#6 Post by rdonnay »

have you change default Setting in postgresql.conf :?:
I haven't touched it yet.
Waiting for advice from the users.

Thanks.
The eXpress train is coming - and it has more cars.

User avatar
Auge_Ohr
Posts: 1422
Joined: Wed Feb 24, 2010 3:44 pm

Re: dbfUpsizing questions

#7 Post by Auge_Ohr »

hi Roger,

Try it with both the original and the new settings in postgresql.conf.
You need to "shut down" (restart) the PostgreSQL service for the changes to take effect.

Open Task Manager -> Network and take a snapshot, please.
greetings by OHR
Jimmy

User avatar
rdonnay
Site Admin
Posts: 4774
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA
Contact:

Re: dbfUpsizing questions

#8 Post by rdonnay »

I recommend to build the indexes after upsizing (INDEX ON/OrdCreate())
This made a huge difference in upload speed.

The file that was taking hours to upload finished in a few minutes.

The processor was evenly split between the DBFUPSIZE program and the PostgreSQL service.
It was still about 60%.

The final test, of course, will be evaluating how fast it will be for OrdCreate() to finish the job.
I realize that this is not the performance to expect when running on a more powerful server where the Medallion system currently resides.
I expect much better performance and will be evaluating that in the next week or so.

Also, the failure of that data file to upsize went away.
It must have been caused by some corruption in the index.

I'm interested in how the OrdCreate() process works with PGDBE.
Does it run a stored procedure on the server to handle the entire process?
If so, then it would make sense that it would be much faster, but then why is it so slow during upsizing?
It seems that the upsizer would be using the same process.
The eXpress train is coming - and it has more cars.

k-insis
Posts: 120
Joined: Fri Jan 28, 2011 4:07 am

Re: dbfUpsizing questions

#9 Post by k-insis »

The cause of the slow speed is the extensive use of triggers such as isam_rowversion_update; each table adds its own triggers, which combined is expensive when ISAM-type indexes are simulated:

CREATE TRIGGER parts_isam_rowversion
AFTER UPDATE
ON public.parts
FOR EACH ROW
EXECUTE PROCEDURE public.isam_rowversion_update();

So each INSERT/UPDATE/DELETE subsequently executes those triggers for the simulated recno(), deleted(), and index-key fields. That means that for a single write operation there will be three or more additional writes.

On top of that, each insert requires incrementing the corresponding table sequence (the recno() simulation).

This will be hard on resources for databases with many users and many writes. It does not count for much with small data sets, though.
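One possible mitigation sketch, using the public.parts table and trigger named above: the ISAM trigger can be disabled during a bulk load and re-enabled afterwards. This is only safe if the rowversion bookkeeping the trigger maintains is repaired before the application reads the table again, so treat it as an experiment, not a recipe.

```sql
-- Hedged sketch: skip per-row trigger overhead during a bulk load.
-- Table and trigger names are taken from the example above.
BEGIN;
ALTER TABLE public.parts DISABLE TRIGGER parts_isam_rowversion;
-- ... bulk INSERT / COPY of the upsized rows here ...
ALTER TABLE public.parts ENABLE TRIGGER parts_isam_rowversion;
COMMIT;
```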
