UNLOGGED table CREATEd on one connection not immediately visible to another connection

    Geoff Winkless <pg@ukku.uk>
    Jan 29, 2026, 3:05 PM UTC
    Hi
    In our application we have a situation where once a day one process
    CREATEs an UNLOGGED table and INSERTs several hundred records using
    individual queries (no explicit transactions) all of which return
    successfully. We then send the ID of the table that we have created
    over a TCP socket to a second process, which runs a query that JOINs
    against that new table.
    Unfortunately quite often the second process is getting a
    PGRES_FATAL_ERROR with
    Primary: relation "qreftmp750" does not exist
    Now (and this is very important) this appears to be a race condition,
    because when that process immediately retries the same query (which we
    do when we get PGRES_FATAL_ERROR) it sometimes works on the second or third
    (or even 11th) attempt.
    If we were somehow failing to create the table then the retries would
    never work, and we absolutely don't send the qreftmp ID to the second
    process until we've successfully INSERTed all of the records, so the
    race isn't on the application side. There are no explicit transactions
    in either process; they all just use implicit autocommit, so
    I don't see how this can be a DDL visibility issue.
    I'm loath to point the finger at PG because I'm sure that if this
    were a real issue it would have been flagged up well before now, but
    I've been staring at our code for days and I'm stumped. Any
    suggestions?
    Thanks
    Geoff
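
    [A quick way to test the visibility question described above (an
    editorial sketch, not part of the original thread; the table name is
    the one from the error message) is to check from a third session, at
    the moment the ID is handed over, whether the relation is visible:]

    ```sql
    -- Sketch: run in a separate session right after the first process
    -- reports success.  to_regclass() returns NULL instead of raising an
    -- error when the relation is not (yet) visible to this session.
    SELECT to_regclass('qreftmp750');
    ```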
      Tom Lane <tgl@sss.pgh.pa.us>
      Jan 29, 2026, 3:41 PM UTC
      Geoff Winkless <pg@ukku.uk> writes:
      > In our application we have a situation where once a day one process
      > CREATEs an UNLOGGED table and INSERTs several hundred records using
      > individual queries (no explicit transactions) all of which return
      > successfully. We then send the ID of the table that we have created
      > over a TCP socket to a second process, which runs a query that JOINs
      > against that new table.
      > Unfortunately quite often the second process is getting a
      > PGRES_FATAL_ERROR with
      > Primary: relation "qreftmp750" does not exist
      > Now (and this is very important) this appears to be a race condition,
      > because when that process immediately retries the same query (which
      > we do when we get PGRES_FATAL_ERROR) it sometimes works on the second
      > or third (or even 11th) attempt.
      Perhaps the second process is querying a standby server rather than
      the primary? Replication lag could explain this. I don't really
      believe that it's possible within a single PG server, though.
      We take very substantial pains to avoid the race condition you're
      positing.
      If there is anything in your client software stack capable of issuing
      implicit begin/commit, that'd deserve a second/third/fourth look...
      		regards, tom lane
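
      [Both of Tom's theories can be checked directly from the second
      process's connection; the following is an editorial sketch using
      stock PostgreSQL functions, not code from the thread:]

      ```sql
      -- Is this session talking to a standby?  Returns true on a replica,
      -- which would make replication lag a plausible explanation.
      SELECT pg_is_in_recovery();

      -- Which server did this connection actually reach?  Useful when a
      -- pooler or load balancer might route sessions to different hosts.
      SELECT inet_server_addr(), inet_server_port();
      ```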
      Adrian Klaver <adrian.klaver@aklaver.com>
      Jan 29, 2026, 3:09 PM UTC
      On 1/29/26 07:04, Geoff Winkless wrote:
      > Hi
      > In our application we have a situation where once a day one process
      > CREATEs an UNLOGGED table and INSERTs several hundred records using
      > individual queries (no explicit transactions) all of which return
      > successfully. We then send the ID of the table that we have created
      > over a TCP socket to a second process, which runs a query that JOINs
      > against that new table.
      > Unfortunately quite often the second process is getting a
      > PGRES_FATAL_ERROR with
      > Primary: relation "qreftmp750" does not exist
      > Now (and this is very important) this appears to be a race condition,
      > because when that process immediately retries the same query (which
      > we do when we get PGRES_FATAL_ERROR) it sometimes works on the second
      > or third (or even 11th) attempt.
      > If we were somehow failing to create the table then the retries would
      > never work, and we absolutely don't send the qreftmp ID to the second
      > process until we've successfully INSERTed all of the records, so the
      > race isn't on the application side. There are no explicit transactions
      > in either process; they all just use implicit autocommit, so
      > I don't see how this can be a DDL visibility issue.
      > I'm loath to point the finger at PG because I'm sure that if this
      > were a real issue it would have been flagged up well before now, but
      > I've been staring at our code for days and I'm stumped. Any
      > suggestions?
      Provide the code for the procedure(s) that create the table and send
      the ID to the other process.
      Question: why is this not run in a single process?
      > Thanks
      > Geoff
      -- Adrian Klaver
      adrian.klaver@aklaver.com