
xTuple 5.0 customer workbench can crash client

I've got a problem with a particular customer account in xTuple. Typing the "bad" customer number into the "Customer #:" line edit control causes the xTuple client to "hourglass" and freeze until the computer issues an "out of memory" error, at which point the only option is to close the client. I can't copy the error message since the computer is pretty non-functional at that point, but I did capture the statement from tailing the PostgreSQL error log (pg 9.6). Here is the entry:

2019-10-01 13:15:29 CDT admin fppprod - LOG: duration: 181987.350 ms statement:
SELECT cust.*, addr_line1 AS description, addr_id, addr_active, addr_line1, addr_line2, addr_line3,
       addr_city, addr_state, addr_postalcode, addr_country, addr_notes, addr_number,
       cntct_id, cntct_addr_id, cntct_first_name, cntct_last_name, cntct_honorific,
       cntct_initials, cntct_active,
       getcontactphone(cntct_id, 'Office') AS cntct_phone,
       getcontactphone(cntct_id, 'Mobile') AS cntct_phone2,
       getcontactphone(cntct_id, 'Fax') AS cntct_fax,
       cntct_email, cntct_webaddr, cntct_notes, cntct_title, cntct_number, cntct_middle,
       cntct_suffix, cntct_owner_username, cntct_name,
       formatAddr(addr_line1, addr_line2, addr_line3, '', '') AS street
  FROM ( SELECT cust_id AS id, cust_number AS number, cust_name AS name, cust_active AS active,
                cust_creditstatus, crmacct_id, true AS iscustomer, cust_cntct_id
           FROM custinfo
           JOIN crmacct ON cust_crmacct_id = crmacct_id
          WHERE NULL IS NULL or cust_number ~* UPPER(NULL)
         UNION ALL
         SELECT prospect_id AS id, prospect_number AS number, prospect_name AS name,
                prospect_active AS active, 'G' AS cust_creditstatus, crmacct_id,
                false AS iscustomer,
                getcrmaccountcontact(prospect_crmacct_id) AS cust_cntct_id
           FROM prospect
           LEFT OUTER JOIN crmacct ON prospect_crmacct_id = crmacct_id
          WHERE NULL IS NULL or prospect_number ~* UPPER(NULL) ) cust
  LEFT OUTER JOIN cntct ON cust_cntct_id = cntct_id
  LEFT OUTER JOIN addr ON cntct_addr_id = addr_id
 WHERE true AND (id=6752) ;

I'm pretty certain the SQL comes from the "custCluster.cpp" file. I wonder if the "WHERE NULL IS NULL or cust_number ~* UPPER(NULL)" clause is causing the query to scan ALL records. I'm just trying to keep my customer running, so I don't have time to dig much deeper.
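For reference, here is what that clause evaluates to (the customer number 'C6752' is made up for illustration):

```sql
-- "NULL IS NULL" is always true, so the OR's right-hand side never matters:
SELECT NULL IS NULL;            -- true
SELECT 'C6752' ~* UPPER(NULL);  -- NULL (unknown), but irrelevant after "true OR ..."
-- So the UNION subquery scans every customer and prospect; only the outer
-- "WHERE true AND (id=6752)" narrows the result afterwards.
```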

Any ideas or suggestions? Other customers work OK, but so far we've identified a couple that trigger this crash.

Jim Wirt

Here is an update with the results of my work on this problem.

When we import a customer we tag the imported data with a sentinel string, "XXIMP0607XX". It is easy to search for and lets us know that a customer service rep didn't enter the customer manually. The sentinel is saved in the addr_notes column of the public.addr record created during the insert.
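If it helps anyone reproduce this, something like the following (an untested sketch to run against your own database) should show whether any addr rows carrying the sentinel have ballooned:

```sql
-- Find addr rows tagged with our import sentinel, biggest notes first:
SELECT addr_id, length(addr_notes) AS notes_len
  FROM addr
 WHERE addr_notes LIKE '%XXIMP0607XX%'
 ORDER BY length(addr_notes) DESC
 LIMIT 20;
```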

At least one of the xt5 triggers does some work trying to consolidate CRM or customer records. When I tail -f the postgres logs I see a lot of entries containing our sentinel string flying by. Browsing the postgres log file, I see at least 2 million occurrences of the sentinel written out. I'm sure it is part of a "slow query" log entry.

The "addr_notes" field is what was causing my client to crash. When I eliminated addr_notes from the query I posted above, everything works as expected. I suspect a trigger or function that tries to consolidate addresses is causing the problem. I see a GIANT text value in the addr_notes field that is composed of our sentinel string concatenated together hundreds of thousands or millions of times.
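As a stopgap to keep the customer running, I'm considering a one-time cleanup along these lines (hypothetical and untested; back up first, and note it assumes the bloat is nothing but repeated sentinels):

```sql
-- Collapse runs of the sentinel back down to a single occurrence:
UPDATE addr
   SET addr_notes = regexp_replace(addr_notes, '(XXIMP0607XX)+', 'XXIMP0607XX', 'g')
 WHERE addr_notes LIKE '%XXIMP0607XXXXIMP0607XX%';
```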



What version did you upgrade from? This does not look like a bug introduced in xTuple ERP 5.0.

I suspect it’s related to the duplicate address handling in the saveAddr() function. This was changed for 4.10.0-beta, perhaps incorrectly. That in turn may have uncovered an ancient bug (2007 or earlier) that could let the addr_notes grow indefinitely — there’s an inequality check on the notes field, not a similarity or containment check.
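Roughly the suspected pattern (a hypothetical paraphrase for illustration, not the actual saveAddr() source):

```sql
-- Hypothetical paraphrase of the suspected logic, not actual xTuple code.
-- An inequality test appends whenever the incoming notes differ at all:
IF COALESCE(pNotes, '') != COALESCE(_addr.addr_notes, '') THEN
  UPDATE addr SET addr_notes = addr_notes || pNotes WHERE addr_id = _addr.addr_id;
END IF;
-- A containment test would stop the same sentinel being re-appended on
-- every save, e.g.: IF strpos(COALESCE(_addr.addr_notes, ''), pNotes) = 0 THEN ...
```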


I upgraded from 4.11. I think you're in the right area there. Tailing the postgres log, it records entries for slow functions, which produces a LOT of traffic. When this problem happens I see row after row of our sentinel string scrolling by, and at the end there are some parameterized queries containing the text "CHANGEONE". I see that same text in the saveaddr() function.