Unlocking records when a PostgreSQL client disconnects abnormally
-
- Posts: 174
- Joined: Thu Nov 05, 2020 10:51 am
- Location: DOMINICAN REPUBLIC
Unlocking records when a PostgreSQL client disconnects abnormally
I have started working with PostgreSQL recently. I was able to start the PostgreSQL session and use the PGDBE driver without any problem. But I hit an error with an undefined variable while running the program and exited abruptly with Alt+C. Since then, some records in some tables have stayed locked permanently. Even after turning the computer off and on again, the records are still locked.
Is there a way to reset, or unlock, the PostgreSQL session so that the records are unlocked?
When a client application aborts its connection to the server, or the connection is terminated or stopped abnormally, ADS releases all locks. The default Advantage Database Server client timeout is 120 seconds. Since PostgreSQL is more powerful, I assumed it would have a similar reset or unlock mechanism for abnormal disconnects.
Can someone help me, please?
Best regards
Re: Unlocking records when a PostgreSQL client disconnects abnormally
The PGDBE (when using the 'upsize' tool) adds extra columns to every table, namely:
__deleted boolean NOT NULL DEFAULT false,
__record integer NOT NULL DEFAULT nextval('sometable___record_seq'::regclass),
__rowversion integer NOT NULL DEFAULT 0,
__keyversion integer NOT NULL DEFAULT 0,
__lock_owner integer NOT NULL DEFAULT 0,
After an abnormal disconnect, it may be that table.__lock_owner contains something other than 0 (zero).
Use pgAdmin to check the contents of those columns in the problematic tables.
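One quick way to check is a query like the following, sketched for a hypothetical table named mytable (substitute the name of the affected table):

```sql
-- List records that still appear locked after the crash.
-- "mytable" is a placeholder for the affected table's name.
SELECT __record, __lock_owner
FROM public.mytable
WHERE __lock_owner <> 0;
```

Run it from pgAdmin's query tool; any row returned is still carrying a stale lock.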
Re: Unlocking records when a PostgreSQL client disconnects abnormally
You're right, Diego. If the app is killed, the lock stays. As k-insis mentioned, there is a column "__lock_owner" in every table, which is 0 (zero) if a record is not locked. If it contains a non-zero integer value, the record is (and stays) locked. This value is not removed if an app doesn't close and quit normally. I reported the problem to Alaska Software, but I think this will be hard to fix. ADS keeps locks in memory, not in the tables, so if a session times out, all locks held by that session become invalid. Since the PGDBE stores the lock in the table itself, there is much more to do.
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Unlocking records when a PostgreSQL client disconnects abnormally
UPDATE public.<mytablename> SET __lock_owner = 0
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Excellent information. In any case, I'll write to Alaska Software and let you know what answer they give me.
Thank you, K-INSIS and TOM.
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Can you copy/paste the reply from Alaska Software? It would be interesting to see whether, and what, they say about that (quite big) issue.
Unless they use some additional timestamp and limit the 'lock' to a bounded timeslot, or even use another table to hold the information... The concept of record locking in SQL is quite different from the file-based one in DBF.
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Alaska opened up a PDR for the issue: https://www.alaska-software.com/scripts ... PDRID=7491
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Hello, Tom.
Well, I'm worried, because I went to the link you gave me, and Alaska explains that they have no solution for that problem.
In any case, I will send an email to Alaska, since in my year of subscription I have only sent one email so far. Maybe a miracle will happen and they will give me a different kind of answer.
Thank you.
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Hi, Diego.
The problem is now part of the "Defect Tracking System" and became a "Problem Report":
https://www.alaska-software.com/support/kbase-old.cxp
There is no solution yet, since they just started working on one. If you face this problem, reset __lock_owner to 0 from inside pgAdmin or with the SQL statement I posted before. We have huge PGDBE/ISAM apps (late beta) running in productive environments at several customer sites, and we haven't faced this problem in reality. This may be by design: we never lock records while data is being edited, but only while it is written. That reduces lock times to a few seconds per day, and a "normal" runtime error still ensures an unlock. The deadlock problem occurs only if a fatal error or an Alt+C comes up. You can switch off Alt+C for your app if you want (SetCancel(.F.)).
I'm quite sure this problem will be fixed in the next update.
Best regards,
Tom
"Did I offend you?"
"No."
"Okay, give me a second chance."
Re: Unlocking records when a PostgreSQL client disconnects abnormally
Locking just before a change and unlocking right after the update is the more sensible solution, including for regular DBF; it spares server resources.
The same situation with 'hanging' locks happens if the connection between client and server is lost while the app is running with active locks.
But Alaska Software might move this functionality to a server-side set of stored procedures that keeps a list of locked records in an access table, keyed by the PIDs of the active connections (from "SELECT * FROM pg_stat_activity"). RLock(), DbUnlock(), FLock(), etc. would then just request a status change, status check, or status removal for a given table.__record by calling those stored procedures with parameters.
A single stored procedure could be run by a pg_cron job once per minute to remove all orphaned lock entries from the table where there is no longer an active connection behind a certain RLock() or FLock().
Sure, a bit more traffic and some server CPU time/IOPS, but it would no longer depend on the client functioning properly.
But it's hard to say what they will do in the end.
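A minimal sketch of that cleanup idea, assuming (this is an assumption, not documented PGDBE behavior) that __lock_owner stores the backend PID of the locking connection, with "mytable" as a placeholder table name:

```sql
-- Hypothetical cleanup: release locks whose owning connection is gone.
-- Assumes __lock_owner holds the backend PID of the locking session;
-- the actual PGDBE semantics of __lock_owner may differ.
UPDATE public.mytable t
SET __lock_owner = 0
WHERE t.__lock_owner <> 0
  AND NOT EXISTS (
      SELECT 1
      FROM pg_stat_activity a
      WHERE a.pid = t.__lock_owner
  );
```

Scheduled via pg_cron, a statement like this would clear orphaned locks automatically, without relying on the client to shut down cleanly.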