Time |
S |
Nick |
Message |
01:21 |
|
|
khall joined #koha |
01:22 |
|
|
aleisha joined #koha |
03:08 |
|
mtj |
hi Joubu: I've uploaded a Data::Session pkg, re bug 17427 |
03:08 |
|
huginn |
Bug https://bugs.koha-community.or[…]_bug.cgi?id=17427 normal, P5 - low, ---, jonathan.druart+koha, Needs Signoff , Replace CGI::Session with Data::Session |
06:55 |
|
|
cait joined #koha |
06:58 |
|
|
paul_p__ joined #koha |
09:20 |
|
cait |
tuxayo: hm maybe via the developer tools? |
09:20 |
|
cait |
that's how i did it in the past |
09:20 |
|
cait |
when testing without javascript |
09:34 |
|
tuxayo |
cait: I tried with those tools to mess with JS and the HTML/DOM but couldn't find what to mess with to disable validation |
09:35 |
|
tuxayo |
Without JS the form doesn't show ^^" |
09:44 |
|
cait |
oh |
10:22 |
|
|
khall joined #koha |
10:32 |
|
|
khall_ joined #koha |
12:50 |
|
|
khall joined #koha |
15:52 |
|
|
dpk_ joined #koha |
16:01 |
|
|
JStal joined #koha |
16:06 |
|
JStal |
Hello! Just discovered that our import_records table is 1.4GiB (!). Ran cleanup_database.pl --z3950 to clear it, expecting the table to truncate to a smaller size, but that didn't happen. Attempting to truncate through phpMyAdmin yields a foreign key error—what should I do to clean this up? |
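A minimal SQL sketch of the foreign-key workaround for the error JStal describes, assuming it comes from child tables (such as import_record_matches) that reference import_records; the database name is a placeholder, and you should back up before truncating anything:

```sql
-- Hypothetical session; 'koha_library' is a placeholder schema name.
-- TRUNCATE fails on an InnoDB table referenced by foreign keys, so
-- disable the checks for this session only, truncate, then re-enable.
USE koha_library;
SET FOREIGN_KEY_CHECKS = 0;
TRUNCATE TABLE import_records;
SET FOREIGN_KEY_CHECKS = 1;
```

Note that truncating the parent while child rows still reference it leaves orphans in the child tables, which is why running cleanup_database.pl (which deletes in the right order) is normally preferable.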
16:30 |
|
tuxayo |
hi JStal :) |
16:31 |
|
tuxayo |
Did the number of records in the table decrease with the cleanup? |
16:31 |
|
JStal |
Hi! They did—the table has zero records now. |
16:35 |
|
tuxayo |
JStal: that's actually surprising, --z3950 should not purge this table |
16:35 |
|
tuxayo |
https://git.koha-community.org[…]_database.pl#L708 |
16:40 |
|
JStal |
Interesting! You're right—my import_batches table is also clear. We don't often use the import function, instead relying on z39.50, so the import_batches table isn't very large. This post from Bywater indicates that cleanup might clear the import_records table, too? https://bywatersolutions.com/n[…]osing-weight-koha |
16:44 |
|
JStal |
"There's also a hidden side to import_records. When you search another catalog using Z39.50 from the staff client, every record that gets returned is also stored in import_records. ... A recent update to Koha added the "--z3950" option to cleanup_database, which will purge all records coming from Z39.50 sources." |
16:45 |
|
JStal |
After switching to koha-shell and exporting PERL5LIB and KOHA_CONF I ran "perl /usr/share/koha/bin/cronjobs/cleanup_database.pl --z3950 --confirm" specifically. This may be the intended outcome, but I'm surprised that the table isn't truncated. |
16:56 |
|
tuxayo |
JStal: so both tables got their records purged? |
16:56 |
|
tuxayo |
the post doesn't mention import_batches so i'm confused |
16:56 |
|
tuxayo |
also |
16:56 |
|
tuxayo |
https://git.koha-community.org[…]_database.pl#L676 |
16:56 |
|
tuxayo |
PurgeImportTables should help |
16:58 |
|
tuxayo |
https://mariadb.com/kb/en/optimize-table/ |
16:58 |
|
tuxayo |
https://mariadb.com/kb/en/defr[…]nodb-tablespaces/ |
16:58 |
|
tuxayo |
↑↑ maybe it's the actual issue |
16:58 |
|
cait |
i seem to remember that the dbms doesn't give the space up |
16:58 |
|
cait |
when you truncate a table |
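What cait remembers can be checked from information_schema: data_free shows space that is allocated to the tablespace but no longer used by rows, i.e. what a purge or truncate leaves behind on disk. A sketch, with 'koha_library' as a placeholder schema name:

```sql
-- On-disk footprint of the largest tables: size_mb is data + indexes,
-- free_mb (data_free) is allocated-but-unused space the DBMS kept.
-- 'koha_library' is a placeholder for your actual Koha database name.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       ROUND(data_free / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'koha_library'
ORDER BY (data_length + index_length) DESC
LIMIT 5;
```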
16:59 |
|
cait |
tuxayo: is that what you were pointing out? |
17:00 |
|
tuxayo |
cait: it seems to be that. I didn't know there was that behaviour until searching just now. |
17:01 |
|
cait |
I usually don't deal with that - just remember something our admin told me |
17:05 |
|
JStal |
Sorry, I must have misunderstood. I thought the first link was demonstrating that the cleanup_database shouldn't clear import_records (but import_batches instead) with the --z3950 command. Line 676 seems to support that, but I only ran the command once, and the import_records table is now empty (but still large). |
17:08 |
|
JStal |
From https://mariadb.com/kb/en/defr[…]odb-tablespaces/: "Note that tablespace files (including ibdata1) will not shrink as the result of defragmentation, but one will get better memory utilization in the InnoDB buffer pool as there are fewer data pages in use." |
17:09 |
|
JStal |
Will this shrink the table on disk? I'm mostly concerned about speed, but that table is several times larger than all the other tables combined. |
17:11 |
|
tuxayo |
> but I only ran the command once, and the import_records table is now empty |
17:11 |
|
tuxayo |
I don't know why both got cleaned. That's the odd thing, right? |
17:12 |
|
tuxayo |
> Note that tablespace files (including ibdata1) will not shrink as the result of defragmentation |
17:12 |
|
tuxayo |
I guess it's OPTIMIZE TABLE that fits, then |
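Since the tables are already empty and only the disk space remains, a rebuild is what reclaims it. A sketch of the OPTIMIZE TABLE approach; whether the file actually shrinks depends on the innodb_file_per_table setting:

```sql
-- Rebuild the emptied tables so InnoDB can return freed pages to the OS.
-- This shrinks the file on disk only when innodb_file_per_table is ON
-- (the default in modern MariaDB); with a shared ibdata1 file the space
-- is reused internally but the file itself does not shrink.
OPTIMIZE TABLE import_records;
OPTIMIZE TABLE import_batches;
```

InnoDB answers with "Table does not support optimize, doing recreate + analyze instead"; that message is expected and the rebuild still happens.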
17:13 |
|
JStal |
Definitely odd. |
17:13 |
|
JStal |
From OPTIMIZE: Note that tablespace files (including ibdata1) will not shrink as the result of defragmentation, but one will get better memory utilization in the InnoDB buffer pool as there are fewer data pages in use. |
17:13 |
|
tuxayo |
I forgot to say that I don't have experience with all of this ^^" Just finding stuff that seems to fit the current use case |
17:19 |
|
JStal |
Oh, you've definitely moved me further than I was! I appreciate it. My searches were all Koha-specific, so I'll broaden them. |
18:26 |
|
|
paul_p__ joined #koha |
22:15 |
|
|
alexbuckley joined #koha |
23:05 |
|
|
cait joined #koha |