SQL system properties are key/value settings that are visible to every Transaction Engine (TE) or Storage Manager (SM) process and client connection in the database. They are stored persistently in the
SYSTEM.PROPERTIES table (see PROPERTIES System Table Description) and are modifiable by the
SET SYSTEM PROPERTY command (see SET).
Note: Only a user that has been granted the system
ADMINISTRATOR role can modify system property values.
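As a sketch of the workflow described above (syntax per the SET command and the SYSTEM.PROPERTIES table referenced here; MIN_QUERY_TIME is one of the properties documented below):

```sql
-- View all current system property values.
SELECT * FROM SYSTEM.PROPERTIES;

-- Change a property value (requires the system ADMINISTRATOR role).
SET SYSTEM PROPERTY MIN_QUERY_TIME = '30';
```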
This is set to
false by default and typically does not need to be modified. If set to
true, the SQL query optimizer reverts to an older cost model mechanism.
Previously, the NuoDB cost model synthetically added 1000 rows to all small tables when computing access costs. As more optimizations have been added, this synthetic addition has been found to complicate optimization in certain cases (for instance, joins between large and small tables, and subquery optimization). The cost model has therefore been changed so that 10 rows are added to all tables, rather than 1000 rows to only small tables. This changes selectivity estimates slightly, which may result in slightly different plans. If you find that a query performs worse because its plan has changed, you can recover the previous behavior by setting this property to true.
This is an integer multiplier of the
--mem memory value that sets the per-connection memory limit for all blocking SQL engine operations that accumulate data in main memory: hash grouping, sorting, sorting via priority queue for LIMIT, DISTINCT (via hash grouping), UNION, LISTAGG, table functions, and stored procedures returning result sets. The default value is
0, which is backwards compatible; that is, no checking is performed as to whether any of the aforementioned operations exceed a memory limit. The maximum value that may be set is either
10 or the value of the hard memory limit (whichever is smaller). When specifying a value for
DEFAULT_CONNECTION_MEMORY_LIMIT, consider also the value set for the
--mem connection property. For example, if
--mem is set to 3GB, and the
DEFAULT_CONNECTION_MEMORY_LIMIT multiplier is set to 4, then memory-intensive operations can use up to 12GB of memory per connection.
Note: When the memory limit is exceeded, any query currently being executed is aborted and an error is generated.
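A hedged sketch of the example above, assuming the SET SYSTEM PROPERTY name = 'value' syntax described under SET (adjust the value for your deployment):

```sql
-- Allow blocking SQL operations (sorts, hash grouping, etc.) to use up to
-- 4x the --mem value per connection. With --mem set to 3GB, this permits
-- up to 12GB per connection for memory-intensive operations.
SET SYSTEM PROPERTY DEFAULT_CONNECTION_MEMORY_LIMIT = '4';
```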
For NuoDB internal testing only.
The maximum number of open result sets per connection. The default is
The maximum number of open SQL statements per connection. The default is
A materialization strategy is used to avoid computing the same subquery multiple times. A subquery materializes its results in an in-memory table that can be reused by the outer query. The
MAX_MATERIALIZED_QUERY_SIZE property specifies the maximum number of bytes that a materialized table can use. If the data to be stored exceeds this limit, it is not cached and the subquery is executed every time its results are needed. The default is
67108864 bytes (64 MB).
MAX_MATERIALIZED_QUERY_SIZE is used as a limit for each query that could be materialized. If your query has two separate
IN SELECT clauses, your total memory consumption can be up to
2*MAX_MATERIALIZED_QUERY_SIZE. If this property is set to
0, no subqueries are materialized.
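For instance, a sketch using the SET SYSTEM PROPERTY syntax referenced above (values are illustrative):

```sql
-- Raise the per-query materialization limit to 128 MB (134217728 bytes).
SET SYSTEM PROPERTY MAX_MATERIALIZED_QUERY_SIZE = '134217728';

-- Or disable subquery materialization entirely.
SET SYSTEM PROPERTY MAX_MATERIALIZED_QUERY_SIZE = '0';
```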
The maximum number of statements stored in the slow query log. The default is
The maximum number of precompiled statements to be cached in memory. The default is
500. See LOCALSTATEMENTS System Table Description for more information. Setting this property to
0 empties the current cache and disables statement caching.
The minimum length of time (in seconds) it takes a SQL query to execute (from start to finish) before it is reported in the slow query log (see QUERYSTATS System Table Description). The default is
10. If the sql-statement-metrics logging category is specified, or the debug logging level is set, the following information is also written to the log:
2018-04-20T09:48:12.293+0200  [6,2690]:SQL metrics for statement: <original SQL statement> ==> COMPILETIME: 380, EXECTIME: 601468, RUNTIME: 601848, TIMESTAMP: 1524210491691483, NUMPARAM: 0, PARAMS: , USER: CLOUD, SCHEMA: USER, CONNID: 6, EXECID: 0, TRANSID: 2690, EXECUTIONCOUNT: 3, CLIENTINFO: nuosql, CLIENTHOST: 127.0.0.1, CLIENTPID: 3469, INDEXHITS: 0, INDEXFETCHES: 0, EXHAUSTIVEFETCHES: 0, INDEXBATCHFETCHES: 0, EXHAUSTIVEBATCHFETCHES: 0, RECORDSFETCHED: 0, RECORDSRETURNED: 1, INSERTS: 0, UPDATES: 0, DELETIONS: 0, REPLACES: 0, RECORDSSORTED: 0, UPDATECOUNT: 0, EVICTED: 0, LOCKED: 0, REJECTEDINDEXHITS: 0, USEDMEMORYBYTES: 0
For information on logging levels supported, see Using Log Files.
If the sql-statement-explain-plans logging category is specified, or the debug logging level is set, the following information is also written to the log:
2018-04-20T09:48:12.293+0200  [6,2690]:EXPLAIN ANALYZE: <the EXPLAIN plan of the statement>
The minimum length of time (in seconds) it takes a SQL query to execute (from start to finish) before the entry in the log controlled by the MIN_QUERY_TIME property is logged with the
warn logging level (rather than the
debug logging level). The default is
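As a sketch (same assumed SET SYSTEM PROPERTY syntax as above), lowering the slow query threshold so that queries taking five seconds or more are reported:

```sql
-- Report any query taking 5 seconds or longer in the slow query log.
SET SYSTEM PROPERTY MIN_QUERY_TIME = '5';
```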
For NuoDB internal testing only.
Used to turn the optimizer's use of statistics on (
true) or off (
false). The default is
true. Statistics are gathered regardless of this property setting; this property only determines whether those statistics are used. We recommend not changing the default value.
This property is used to automatically roll back the transaction chosen as the victim when a deadlock takes place. By default, this is set to
false. When set to
false (the default), a given update operation, such as
SELECT..FOR UPDATE or
DELETE, in a
READ COMMITTED or
WRITE COMMITTED transaction blocks if it encounters an uncommitted insert operation. If this property is set to
true, then the update operation is not blocked by the uncommitted insert.
The percentage of change in the number of rows in a table that causes precompiled statements using the table to be recompiled. The default is 0.10 (10%). Compiled statements using a table that has had more than 10% of its rows changed have their cached statements invalidated, which forces a recompilation the next time the statement is used.
The maximum number of deterministic User-Defined Function (UDF) results that will be cached in memory. The cache stores results for any type of UDF, regardless of how it is implemented (Java or SQL). A deterministic UDF is one that always returns the same result when called with the same arguments, so its result can be cached. When the cache is full, the oldest entry is removed; if that combination of arguments is used again, the UDF code is executed again to compute the result. The default is
DB_TRACE, DB_TRACE_TABLE, DB_TRACE_PATTERN, DB_TRACE_MIN_TIME, DB_TRACE_PROCEDURES
These system properties are used to define SQL tracing at the
GLOBAL level. These system properties cannot be modified by the
SET command. See Using the SQL Trace Facility for more information on setting these system properties.
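Although these properties cannot be changed with SET, their current values can still be inspected in the SYSTEM.PROPERTIES table; a sketch, with the filter column name assumed from the PROPERTIES System Table Description referenced above:

```sql
-- Inspect the current values of the SQL trace properties.
SELECT * FROM SYSTEM.PROPERTIES WHERE PROPERTY LIKE 'DB_TRACE%';
```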