Is anyone using PostGreSQL

Xbase++ 2.0 Build 554 or later
k-insis
Posts: 120
Joined: Fri Jan 28, 2011 4:07 am

Re: Is anyone using PostGreSQL

#11 Post by k-insis »

Did you take into account that freshly imported tables (from .dbf to PostgreSQL) are probably not indexed on the server side as a result of the database export/import? Were there any indexes before?

If not, you should create appropriate keys (primary/secondary/foreign) to speed up queries.
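For example, a minimal sketch of adding the missing keys after an import (the table and column names are hypothetical; shown here through Python's bundled sqlite3 driver so the example is self-contained, but the same SQL statements apply to PostgreSQL via any client):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Hypothetical table as it might look after a raw .dbf import: no keys at all.
con.execute("CREATE TABLE customer (custno INTEGER, name TEXT, city TEXT)")
con.executemany("INSERT INTO customer VALUES (?, ?, ?)",
                [(i, f"name{i}", "Berlin") for i in range(1000)])

# Add the missing indexes: a unique "primary" key and a secondary search key.
con.execute("CREATE UNIQUE INDEX pk_customer ON customer(custno)")
con.execute("CREATE INDEX ix_customer_name ON customer(name)")

# The planner now uses the index instead of a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM customer WHERE name = ?", ("name42",)
).fetchall()
print(plan[0][-1])
```

On PostgreSQL you would check the same thing with `EXPLAIN`; the point is identical: without the index, every WHERE lookup scans the whole table.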

Auge_Ohr
Posts: 1422
Joined: Wed Feb 24, 2010 3:44 pm

Re: Is anyone using PostGreSQL

#12 Post by Auge_Ohr »

hi,

As far as I know (as of beta 519), PgDBE does NOT use "real" PostgreSQL indexes.

Upsizing a DBF with the "UpSizeDBF" XML and Xbase++ indexes will create an "internal" field for each index, populated from its IndexKey() string.
You can speed up SEEK by creating a "real" PostgreSQL index on each of those internal fields, but that does not help SKIP much.

When using the PostgreSQL v9.x API (Alaska still uses the outdated v8.x API), you can use row_number(), which works like MySQL's "RowID", to create an "extra" column on the fly.
With that, SKIP, e.g. in a browser "skipper", can move through rows fast.
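The idea can be sketched with the standard SQL row_number() window function (table and column names are hypothetical; demonstrated with Python's bundled sqlite3 driver for a self-contained example, but the same SQL runs on PostgreSQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (partno INTEGER, descr TEXT)")
# Insert in reverse order so the row numbering below really reflects ORDER BY.
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [(i, f"part {i}") for i in range(100, 0, -1)])

# Number every row in index order on the fly, then fetch an arbitrary window,
# e.g. rows 10..14 for a browser that just SKIPped there.
rows = con.execute("""
    SELECT rnum, partno FROM (
        SELECT ROW_NUMBER() OVER (ORDER BY partno) AS rnum, partno FROM parts
    ) AS w
    WHERE rnum BETWEEN 10 AND 14
""").fetchall()
print(rows)
```

Because the numbering is computed per query, it stays correct even after inserts and deletes, unlike a stored counter column.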

As Hector says, you can use LIMIT / OFFSET to "navigate" in an SQL table, but with a big OFFSET it gets slow (try GoBottom vs. GoTop).
The PostgreSQL v9.x API does have MOVE (on a cursor) to speed up skipping over rows.
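To illustrate why a large OFFSET is slow: the server still has to read and discard every skipped row. One common workaround (a sketch of the general technique, not of pgDBE's internals; hypothetical table names, demonstrated with Python's sqlite3 driver) is keyset pagination, which seeks past the last key seen instead of counting rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (orderno INTEGER PRIMARY KEY)")
con.executemany("INSERT INTO orders VALUES (?)", [(i,) for i in range(1, 1001)])

# OFFSET pagination: cost grows with the offset, because the skipped rows
# are still produced and thrown away on the server.
page_via_offset = con.execute(
    "SELECT orderno FROM orders ORDER BY orderno LIMIT 5 OFFSET 900").fetchall()

# Keyset pagination: remember the last key of the previous page and seek
# past it via the index; cost stays flat no matter how deep you navigate.
last_seen = 900
page_via_keyset = con.execute(
    "SELECT orderno FROM orders WHERE orderno > ? ORDER BY orderno LIMIT 5",
    (last_seen,)).fetchall()

print(page_via_offset == page_via_keyset)
```

Both queries return the same page, but the keyset form uses the primary-key index to jump straight to the start of the page, which is why deep "GoBottom"-style navigation stays fast.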

PostgreSQL is not inherently slower than ADS or MySQL; the problem is how Alaska tries to implement ISAM-style access so that "old" Xbase code (all versions) runs against SQL.
When using the native DLL or ODBC (e.g. with SqlExpress++), you can build fast Xbase++ applications using SQL.
greetings by OHR
Jimmy

Gerrit
Posts: 12
Joined: Fri Aug 03, 2012 10:20 am

Re: Is anyone using PostGreSQL

#13 Post by Gerrit »

Just a final follow-up. I contacted Alaska regarding the postgresql.conf settings, and this is the response I received. It does make sense, as there are many factors that affect database performance.
----
Hello Mr Ferwerda,

We wish you a Happy New Year 2015.

Thank you very much for sharing your thoughts.

It is true that you can modify quite a few settings in the PostgreSQL configuration file.
If you would like to fine-tune those settings in the context of your application, there is
nothing much wrong with that, provided you know exactly what these settings do to the
inner workings of the server system.

Everything you do there is in the context of your application and its usage by the
number of clients working with it. Changing code, modifying tables, or changing the
number of concurrent users may contradict the optimizations you have done. Consequently,
touching the server config file will force you to rethink the settings on a regular basis.

Because the settings depend on the application to such a high degree, we will not
give guidance or best-practice advice on what to do in this area; there is no best
practice. What we can see here is that the PostgreSQL people have found a very
well-balanced set of default settings.

If you would like to fine-tune your application at the server level, pick up the books
written in that area and do the experiments.

When it comes to optimization, our advice does not concern the buffer sizes. Changes
in this area have a high potential for introducing race conditions and similar problems.
Our advice is:

* Carefully create SQL indexes. Dietmar Schnitzer has written a nice posting in the
public.xbase++.postgresql newsgroup on how he managed to speed things up by orders
of magnitude (not merely a factor of 10) simply by creating an index on _one_ ISAM
index field.
* Invest in good hardware. A hard disk with 10,000 RPM will boost processing
considerably compared to a 7,500 RPM drive.
* Have your transaction log on a dedicated hard drive
* Use an SSD drive for the transaction log file.

You may consider cross-posting this mail to Roger's board, as it gives advice on
where to start optimizing things.

I hope this helps,

With my best regards,

Andreas Herdt
Alaska Technical Support
