In commit b6923420f3 (MDEV-29445)
some hash tables were accidentally created with the minimum size
(101 entries) instead of correctly deriving the size from the
initial innodb_buffer_pool_size. This led to very long hash bucket
chains, which are very slow to traverse.
ut_find_prime(): Assert that the size is nonzero in order to catch
this type of regression in the future.
innodb_init_params(): Do not bother reading buf_pool.curr_size()
when it is known to be 0.
srv_start(): Correctly initialize srv_lock_table_size to 5 times
buf_pool.curr_size(), that is, the buffer pool size in pages,
between invoking buf_pool.create() and lock_sys.create().
btr_search_enable(), dict_sys_t::create(), dict_sys_t::resize():
Correctly refer to buf_pool.curr_pool_size(), that is,
innodb_buffer_pool_size in bytes, when calculating the hash table size.
In MDEV-29445 the expressions buf_pool_get_curr_size() were
accidentally replaced with buf_pool.curr_size().
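To illustrate the distinction, here is a minimal standalone sketch with
made-up names and ratios (not the actual InnoDB code): curr_pool_size()
is a byte count and feeds the hash table sizing, while curr_size() is a
page count and feeds srv_lock_table_size.

  #include <cassert>
  #include <cstddef>

  // Hypothetical sketch only; the fraction used for the hash table is
  // arbitrary here, and ut_find_prime() would be applied to the result.
  static size_t hash_cells_from_pool(size_t pool_size_bytes)
  {
    assert(pool_size_bytes != 0);   // the regression the new assertion catches
    return pool_size_bytes / 512;   // derive cells from bytes, not pages
  }

  static size_t lock_table_size(size_t pool_size_pages)
  {
    return 5 * pool_size_pages;     // the srv_lock_table_size rule above
  }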
buf_buddy_shrink(): Properly cover the case when KEY_BLOCK_SIZE
corresponds to the innodb_page_size, that is, the ROW_FORMAT=COMPRESSED
page frame is directly allocated from the buffer pool, not via the
binary buddy allocator.
buf_LRU_check_size_of_non_data_objects(): Avoid a crash when the
buffer pool is being shrunk.
buf_pool_t::shrink(): Abort if over 95% of the shrunk buffer pool
would be occupied by the adaptive hash index or record locks.
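The abort threshold amounts to a simple check; a hypothetical sketch
(names invented, not the actual buf_pool_t::shrink() code):

  #include <cstddef>

  // Hypothetical sketch: refuse to shrink if the adaptive hash index and
  // record locks would occupy more than 95% of the shrunken pool.
  static bool shrink_leaves_too_little(size_t new_pool_bytes,
                                       size_t ahi_bytes, size_t lock_bytes)
  {
    return ahi_bytes + lock_bytes > new_pool_bytes / 100 * 95;
  }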
In commit b6923420f3 (MDEV-29445)
we started to specify the MAP_POPULATE flag for allocating the
InnoDB buffer pool. This would cause a lot of time to be spent
on __mm_populate() inside the Linux kernel, such as 16 seconds
to pre-fault or commit innodb_buffer_pool_size=64G.
Let us revert to the previous way of allocating the buffer pool
at startup. Note: An attempt to increase the buffer pool size by
SET GLOBAL innodb_buffer_pool_size (up to innodb_buffer_pool_size_max)
will invoke my_virtual_mem_commit(), which will use MAP_POPULATE
to zero-fill and prefault the requested additional memory area, blocking
buf_pool.mutex.
Before MDEV-29445 we allocated the InnoDB buffer pool by invoking
mmap(2) once (via my_large_malloc()). After the change, we would
invoke mmap(2) twice, first via my_virtual_mem_reserve() and then
via my_virtual_mem_commit(). Outside Microsoft Windows, we are
reverting to my_large_malloc()-like allocation.
my_virtual_mem_reserve(): Define only for Microsoft Windows.
Other platforms should invoke my_large_virtual_alloc() and
update_malloc_size() instead of my_virtual_mem_reserve() and
my_virtual_mem_commit().
my_large_virtual_alloc(): Define only outside Microsoft Windows.
Do not specify MAP_NORESERVE nor MAP_POPULATE, to preserve compatibility
with my_large_malloc(). Were MAP_POPULATE specified, the mmap()
system call would be significantly slower, for example 18 seconds
to reserve 64 GiB upfront.
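For illustration, a minimal sketch of the difference on Linux (a
hypothetical helper, not the server code): a plain anonymous mmap()
returns almost immediately, while adding MAP_POPULATE forces the kernel
to pre-fault every page up front.

  #include <sys/mman.h>
  #include <cstddef>

  // Hypothetical helper: allocate an anonymous mapping, optionally
  // pre-faulted. With populate=true the call behaves like the reverted
  // code path and can take many seconds for tens of gigabytes; without
  // it, pages are faulted in lazily on first access.
  static void *alloc_pool(size_t bytes, bool populate)
  {
    int flags = MAP_PRIVATE | MAP_ANONYMOUS;
  #ifdef MAP_POPULATE
    if (populate)
      flags |= MAP_POPULATE;
  #endif
    void *p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, flags, -1, 0);
    return p == MAP_FAILED ? nullptr : p;
  }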
log_t::append_prepare_wait(): Do not attempt to read log_sys.write_lsn
because it is not protected by log_sys.latch but by write_lock, which
we cannot hold here. The assertion could fail if log_t::write_buf()
is executing concurrently, and it has not yet executed log_write_buf()
or updated log_sys.write_lsn.
Fixes up commit acd071f599 (MDEV-21923)
There were two issues with the test:
1. A race between a race_condition.inc and a status variable, where the
status variable Rpl_semi_sync_master_status could be ON before the
semi-sync connection finished establishing, resulting in
Rpl_semi_sync_master_clients showing 0 (instead of 1). To fix this,
we instead simply wait for Rpl_semi_sync_master_clients to be 1
before proceeding.
2. Another race between a race_condition.inc and a status variable,
where the wait_condition waited for a processlist command of
'BINLOG DUMP' to disappear in order to infer that the binlog dump
thread was killed, after which we verified that the semi-sync state
was correct using status variables. However, the 'BINLOG DUMP'
command is overridden with a killed status before the semi-sync
tear-down happens, and thereby we could see invalid values. The fix
is to change the wait_condition to instead wait until the connection
with the replication user is gone, because that connection stays
through the whole binlog dump thread tear-down life-cycle.
As seen with OpenWrt and some other distros, the
determination of the hostname can sometimes need alternate
commands.
This provides a CMake option HOSTNAME for non-Windows machines
for the mariadb-install-db and mariadbd-safe scripts
and the support-files init scripts.
The minimum statistics level now is rocksdb::StatsLevel::kDisableAll.
The default remains rocksdb::StatsLevel::kExceptHistogramOrTimers
which is now 1 (it used to be 0).
When the SELECT sub-statement executes a stored function that is defined
to modify a non-transactional table, like
delimiter |;
create function f_ia(arg int)
returns integer
begin
  insert into ti_pk set a=1;
  insert into ta set a=1;
  insert into ti_pk set a=arg;
  return 1;
end |
delimiter ;|
any records that the function has successfully modified must be
binlogged as a "side effect" of the CREATE-SELECT.
It is expected that a failing CREATE-SELECT like
--error ER_DUP_ENTRY
set statement binlog_format = ROW for create table t_y (a int) engine=aria select f_ia(1 /* err in Innodb after Aria stmt is done */) as a;
leaves behind the following state:
include/show_binlog_events.inc
Log_name Pos Event_type Server_id End_log_pos Info
master-bin.000001 # Gtid # # BEGIN GTID #-#-#
master-bin.000001 # Table_map # # table_id: # (test.ta)
master-bin.000001 # Write_rows_v1 # # table_id: # flags: STMT_END_F
master-bin.000001 # Query # # COMMIT
select * from ta;
a
1
select count(*) = 0 from ti_pk;
true
However, it's not so for the binlog part.
The reason is that prior to the MDEV-34150 fixes, the CREATE-SELECT's
errored phase left the binlog caches intact (the file:pos are from 10.11 c06c36218a)
to defer their reset to the rollback phase of the top-level statement,
/* the statement cache gets binlogged */
where the side-effect changes get binlogged.
The MDEV-34150 fixes harmed the statement cache in particular
in the error phase (the +#4 frame below; file:pos are from 395db6f1d5, the current 11.8):
/* The caches, incl. the statement cache, are gone */
/* because of MDEV-34150 */
+#4 0x00005d75f9b6a92e in THD::binlog_remove_rows_events (this=0x52c000240288) at log.cc:579
Apparently it should not have been there, as proper emptying (either
a reset for the transactional cache, or a flush and then reset for the
statement cache) is (must be) always done via binlog_rollback of the
top-level statement.
To observe the above requirement, the case is fixed by removing
thd->binlog_remove_rows_events() and its definition.
Tested with rpl.rpl_create_select_row.
Reviewed-by Brandon Nesterenko.
Hopefully, this ends the long story of CapabilityBoundingSet
in mariadb.service.
It started from MDEV-9095 (27e6fd9a59), which was supposed
to let --memlock work without root, but instead of
adding the necessary capability (CAP_IPC_LOCK) by putting it into
AmbientCapabilities, it removed all other capabilities
by putting CAP_IPC_LOCK into CapabilityBoundingSet
(which is the mask of allowed capabilities).
This broke the pam plugin, which needed CAP_DAC_OVERRIDE;
that was fixed in MDEV-19878 (dd93028dae) by appending
CAP_DAC_OVERRIDE to CapabilityBoundingSet.
Obviously, --memlock still didn't work; this was fixed
in MDEV-33301 (76a27155b4) by moving CAP_IPC_LOCK
to AmbientCapabilities.
Unfortunately, it moved too much (everything), so
MDEV-36229 (85ecb80fa3) fixed that by moving CAP_DAC_OVERRIDE
back to CapabilityBoundingSet.
This caused MDEV-36591 (8925877dc8), triggering a bug in old
systemd versions, and it broke the pam plugin on CentOS Stream 10,
where CAP_DAC_OVERRIDE alone was apparently not enough.
Let's finally fix this by removing CapabilityBoundingSet
completely and keeping CAP_IPC_LOCK in AmbientCapabilities,
which should've been the correct fix for MDEV-9095 from the start.
Never estimate that a graph search will visit more nodes than there
are in the graph. In fact, let's reduce the graph size by 30%: it'll
increase the false positive rate of the bloom filter by 2% when
visiting the whole graph, which doesn't affect recall noticeably.
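A sketch of the resulting clamp (hypothetical names; the 0.7 factor is
the 30% reduction mentioned above):

  #include <algorithm>
  #include <cstddef>

  // Hypothetical sketch, not the actual code: never size the search for
  // more visited nodes than a reduced graph size allows.
  static size_t estimated_visits(size_t raw_estimate, size_t graph_size)
  {
    const size_t cap = static_cast<size_t>(graph_size * 0.7);
    return std::min(raw_estimate, cap);
  }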
We need to read the shared graph size under a lock. Let's store it
in the thread-local, unused TABLE::used_stat_records member.
Get rid of the need for materialization for a usual INSERT (cache results in
Item_cache* if needed):
- subqueries in VALUES do not see new records in the table we are
inserting into
- subqueries in RETURNING are prohibited from using the table we are
inserting into
According to buildbot cross-reference atomic.alter_table is failing ~10
times a day due to 900 seconds test case timeout. Split it into two
tests: atomic.alter_table_myisam and atomic.alter_table_innodb.
It should reduce failure frequency down to once a day or so, similarly
to atomic.alter_table_aria test.
Disabling InnoDB for MyISAM/Aria/RocksDB tests makes them 20-35% faster.
Inside get_lookup_field_values(), when the value of lower_case_table_names
is 2, the case of table and database names should be preserved. This
commit therefore changes the condition to lowercase the names only when
lower_case_table_names is 1, rather than for any value not equal to 0
(that is, both 1 and 2).
SELECT FROM information_schema.metadata_lock_info may sporadically
return an MDL_SHARED lock acquired by the InnoDB purge thread.
Fix proposed in a7bf0a42d0 wasn't enough to solve this problem:
apparently it did protect against purging old rows, but not against
locking the table.
Reverted a7bf0a42d0 in favour of the solution proposed in e996f77cd8,
that is, use wait_all_purged.inc to make sure purge is completed.
Reason:
=======
- MDEV-16239 does apply the DML logs after bulk insert for
ALTER TABLE..ALGORITHM=COPY, but InnoDB fails to reset the bulk insert
state in ha_innobase::extra(HA_EXTRA_END_ALTER_COPY). This leads to a
crash while applying the DML logs.
Solution:
=========
ha_innobase::extra(HA_EXTRA_END_ALTER_COPY): Reset TRX_DDL_BULK at the
end of the bulk insert operation.
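A heavily simplified sketch of the idea (hypothetical types, not the
InnoDB handler code): the bulk-insert state entered for the copy phase
must be cleared when the copy ends, before the buffered DML log is applied.

  // Hypothetical sketch only.
  enum class CopyPhase { BEGIN_ALTER_COPY, END_ALTER_COPY };

  struct AlterCopyState
  {
    bool bulk_insert= false;          // stands in for the TRX_DDL_BULK state

    void extra(CopyPhase op)
    {
      if (op == CopyPhase::BEGIN_ALTER_COPY)
        bulk_insert= true;
      else if (op == CopyPhase::END_ALTER_COPY)
        bulk_insert= false;           // the missing reset; without it the
                                      // later DML log apply sees stale state
    }
  };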
When a replica stops an established semi-sync connection, it is
supposed to kill the corresponding binlog dump thread on the primary
server. However, when connections are configured to use SSL, this new
connection created by the replica to kill the dump thread doesn't have
any logic to configure SSL options, and thereby the connection can't be
made, and the dump thread will never be killed.
This patch adds logic to configure the semi-sync kill connection with
SSL. The existing logic to set up the connection options for the regular
connection was extracted into a function that the semi-sync kill
connection invokes.
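A hedged sketch of that extraction (the helper name is invented;
mysql_ssl_set() from the client API is used just for illustration):

  #include <mysql.h>

  // Hypothetical helper: configure SSL once, so that the regular
  // replication connection and the semi-sync "kill dump thread"
  // connection are set up identically before mysql_real_connect().
  static void setup_connection_ssl(MYSQL *mysql,
                                   const char *key, const char *cert,
                                   const char *ca, const char *capath,
                                   const char *cipher)
  {
    mysql_ssl_set(mysql, key, cert, ca, capath, cipher);
  }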
Co-author: Brandon Nesterenko <brandon.nesterenko@mariadb.com>
A statement SET GLOBAL innodb_buffer_pool_size=...
could fail for no good reason when the buffer pool contains many
pages that can actually be evicted.
buf_flush_LRU_list_batch(): Keep evicting as long as the buffer pool
is being shrunk, for at most innodb_lru_scan_depth extra blocks.
Disregard the flush limit for pages that are marked as freed in files.
buf_flush_LRU_to_withdraw(): Update the to_withdraw target during
buf_flush_LRU_list_batch().
buf_pool_t::will_be_withdrawn(): Allow also ptr=nullptr (the condition
will not hold for it).
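The extra scanning amounts to a wider batch bound; a hypothetical sketch
(not the actual buf_flush code):

  #include <cstddef>

  // Hypothetical sketch: while the pool is being shrunk, the LRU batch may
  // scan up to innodb_lru_scan_depth blocks beyond the requested batch
  // size, so that withdrawable pages keep getting evicted.
  static size_t lru_batch_scan_limit(size_t requested_batch,
                                     size_t lru_scan_depth, bool shrinking)
  {
    return requested_batch + (shrinking ? lru_scan_depth : 0);
  }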
This fixes a regression that was introduced in
commit b6923420f3 (MDEV-29445)
and caught by the test innodb.temp_truncate_freed in MariaDB Server 11.4.
Tested by: Thirunarayanan Balathandayuthapani
Reviewed by: Thirunarayanan Balathandayuthapani
Combining AmbientCapabilities and CapabilityBoundingSet configuration
within a service file is, as we have found by testing, not supported
for non-root users in systemd v245 (Ubuntu 20.04) and v239 (RHEL 8).
This resulted in the service start error EXIT_CAPABILITIES, a limitation
of those systemd versions whose consequences we cannot work around.
With systemd version 247 these combined capabilities have been tested to
work on Debian 11. No other supported major distro runs systemd
version 246, and if one did, the missing CAP_IPC_LOCK capability
wouldn't be noticed, as it was a convenience for --memlock users.
As such, we disable the AmbientCapabilities entry for CAP_IPC_LOCK rather
than disabling the CapabilityBoundingSet, because doing the latter
would disable authentication for MariaDB users that have configured PAM
with MariaDB.
Should a user require CAP_IPC_LOCK, they can add this capability to the
CapabilityBoundingSet in their own systemd overlay file and configure
the capability file attributes on the mariadbd executable to have the
IPC_LOCK capability. This isn't configured by default, as the presence
of a capability on the MariaDB server executable is detected by the
OpenSSL libraries as "insecure", which will then ignore any user-configured
TLS configuration file passed in via the OPENSSL_CONF environment variable.
Test changes only. The idea of the test is to test two concurrent
non-conflicting DDL clauses. Therefore, use debug sync to
really execute the two DDL clauses as concurrently as Galera
allows.
Signed-off-by: Julius Goryavsky <julius.goryavsky@mariadb.com>
The solution is to check whether the transaction that modified a record is
still active in lock_clust_rec_read_check_and_lock(). If yes, then just
request a lock. If no, then, depending on whether the current transaction's
read view can see the changes, either return DB_RECORD_CHANGED or request a
lock.
We can do the check in lock_clust_rec_read_check_and_lock() because the
transaction tries to set a lock on the record the cursor points to after
the transaction is resumed and the cursor position is restored. If the lock
already exists, then we don't request the lock again. But for the current
commit it's important that lock_clust_rec_read_check_and_lock() be invoked
again for the same record, so that the check can be repeated after the
transaction that modified the record has been committed or rolled back.
MDEV-33802 (4aa9291) is partially reverted. If some transaction holds an
implicit lock on some record and a transaction with snapshot isolation level
requests a conflicting lock on the same record, it should be blocked instead
of returning DB_RECORD_CHANGED, so that it is able to continue execution
when the implicit lock owner is rolled back.
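A simplified sketch of the resulting decision (hypothetical names, not
the actual lock_clust_rec_read_check_and_lock() code):

  // Hypothetical sketch only.
  enum class Outcome { REQUEST_LOCK, RECORD_CHANGED };

  static Outcome snapshot_read_check(bool modifying_trx_still_active,
                                     bool read_view_sees_modification)
  {
    if (modifying_trx_still_active)
      return Outcome::REQUEST_LOCK;        // block; the check is repeated later
    return read_view_sees_modification
           ? Outcome::REQUEST_LOCK         // visible change: lock as usual
           : Outcome::RECORD_CHANGED;      // invisible committed change
  }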
The construction
--------------------------------------------------------------------------
let $wait_condition=
select count(*) = 1 from information_schema.processlist
where state = 'Updating' and info = 'UPDATE t SET b = 2 WHERE a';
--source include/wait_condition.inc
--------------------------------------------------------------------------
is not reliable enough to make sure the transaction is blocked in the test
case; the test failed sporadically with the
--------------------------------------------------------------------------
./mtr --max-test-fail=1 --parallel=96 lock_isolation{,,,,,,,}{,,,}{,,} \
--repeat=500
--------------------------------------------------------------------------
command. That's why it was replaced with debug sync-points.
Reviewed by: Marko Mäkelä
during mariabackup --prepare
Reason:
=======
During --prepare of a partial backup, if InnoDB encounters redo log
records for an excluded tablespace, then InnoDB stores the space id in the
dirty tablespace list during recovery, anticipating that it may encounter
FILE_* redo log records in the future. But even when we do encounter a
FILE_* record for the excluded tablespace, we fail to replace the
name in the dirty tablespace list. This leads to a missing
FILE_* redo log records error.
Solution:
=========
fil_name_process(): Rename the file name from "" to the name encountered
in the FILE_* record.
recv_init_missing_space(): Correct the condition for printing the warning
message about a missing tablespace during the mariabackup restore process.
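A simplified sketch of the renaming idea (hypothetical container and
names, not the actual recovery code):

  #include <cstdint>
  #include <map>
  #include <string>

  // Hypothetical sketch: recovery tracks tablespaces discovered from redo
  // log records; a space first seen without a file name ("" placeholder)
  // adopts the name from a later FILE_* record instead of being reported
  // as missing.
  static std::map<uint32_t, std::string> dirty_tablespaces;

  static void on_file_record(uint32_t space_id, const std::string &name)
  {
    auto [it, inserted]= dirty_tablespaces.emplace(space_id, name);
    if (!inserted && it->second.empty())
      it->second= name;   // replace the "" placeholder with the real name
  }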