Time Nick Message
20:57 lavamind ack
20:57 rangi yeah you won't need them then
20:57 lavamind the imports were recorded as being made 7 years ago
20:57 lavamind I'm pretty sure that they do, because they've never complained even though the cleanup script deletes them
20:55 rangi if it's for teaching you could zero those tables out. They can populate them by using stage marc records if they need
20:53 lavamind I'm not sure if they do it, probably though, I guess I should find out
20:53 rangi that would be testing the whole workflow
20:53 rangi they could always load a marc file via stage marc records to test that
20:53 lavamind well one of the classes teaches acquisitions
20:53 rangi librarians can see it, other users can't
20:52 lavamind it's for classes, so that each student gets their own koha instance from a template
20:52 rangi yep the import_ stuff sits outside the catalogue
20:52 rangi then you can order from that in acquisitions
20:52 lavamind there's supposed to be a bunch of borrowers, items in the catalogue, etc
20:52 rangi you can stage that in tools
20:52 rangi one acquisition workflow (that your library probably doesn't use) is that booksellers give you the catalogue data for their inventory
20:52 lavamind well it's not supposed to be a fresh koha
20:51 rangi but you are probably ok to wipe them out on a fresh koha
20:51 rangi whenever you do a z3950 search, the results are stored in there too
20:51 * lavamind is just the koha admin, not a koha user
20:50 rangi it's called the reservoir there
20:50 rangi (cataloguing, not search the catalogue ;))
20:50 rangi you will find them too
20:49 * lavamind looks
20:49 rangi also if you search in cataloguing
20:49 rangi it's where records sit outside of the catalogue, that you can either bring into the catalogue via staged marc record management, or via acquisitions
20:48 rangi use them
20:48 rangi and staged marc record management (under tools)
20:48 rangi acquisitions
20:47 rangi yes they are
20:47 lavamind I don't think these are used for anything inside Koha?
20:47 lavamind for example there are many many records in the import_record table
20:47 lavamind yes of course, only the data hehe
20:46 rangi not the tables themselves
20:45 rangi the cron job only removes data from it
20:45 rangi hmm?
20:45 lavamind since the cleanup cronjob removes it anyway, no sense in putting it in the database in the first place
20:45 lavamind I think import_* doesn't have much use in a default sql
20:44 lavamind I also found https://github.com/Koha-Community/Koha/blob/master/misc/cronjobs/cleanup_database.pl
20:44 lavamind rangi: thank you
20:44 rangi lavamind: http://schema.koha-community.org/
20:36 lavamind I have a bunch of import_* tables with some data in them, I'm not sure I need it in there
20:35 lavamind where can I find a description of the database tables
19:57 magnuse i have a file with 33869 records
19:55 magnuse ..per file, i mean
19:54 magnuse anyone got a hunch about the max number of records the stage/import tools can handle?
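The "zero those tables out" approach rangi suggests could be sketched roughly as below. This is a hypothetical helper, not anything shipped with Koha: the table names come from the Koha schema, but the deletion order and the `koha_template` database name are assumptions to verify against your own schema version.

```shell
# Sketch: empty the import "reservoir" tables in a template database before
# dumping it. Children are deleted before parents to satisfy foreign keys.
# Table names are from the Koha schema; check them against your version.
reservoir_tables="import_record_matches import_items import_biblios import_auths import_records import_batches"

empty_reservoir() {
  db="$1"
  for t in $reservoir_tables; do
    mysql "$db" -e "DELETE FROM $t;"
  done
}

# To actually run it (requires a reachable MySQL server and that database):
# empty_reservoir koha_template
```

An alternative, as discussed above, is to leave the data in place and let `misc/cronjobs/cleanup_database.pl` purge it periodically on each instance.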
19:52 lavamind agreed
19:41 Joubu the easiest method is to add the rebuild -f command after you've inserted your dump
19:38 lavamind each time I'd have to regenerate those records
19:38 lavamind yeah, however that seems like more of a hassle, especially since I'll have to update the sql from time to time
19:35 Joubu you could fill zebraqueue
19:32 lavamind Joubu: yeah, I know, I'm wondering whether I can shape my default.sql.gz to avoid having to run it
19:28 Joubu koha-rebuild-zebra -f will trigger a full reindex
19:21 lavamind I'm not sure how I can trigger it
19:21 lavamind I think the way my default SQL is set up, it may not trigger a full reindex
19:21 lavamind my biblio table is full of records however
19:19 lavamind in the default.sql.gz I use when creating my instance, zebraqueue is empty
19:17 lavamind Joubu: hrmm
19:15 Joubu so yes it's expected to launch a full reindex if your record tables already contain something
19:15 Joubu the indexer daemon is watching the zebraqueue table, which is filled on import/create/delete/update of biblio or auth records
19:11 lavamind once I run the full rebuild, search starts working
19:11 lavamind (searching doesn't work)
19:11 lavamind my instance gets created normally, and the indexer daemon is started, but the index seems empty
19:10 lavamind when creating an instance with a default database, is it normal to require "koha-rebuild-zebra -f"?
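The two options discussed above can be sketched as follows, assuming a Debian-packaged Koha where `koha-mysql` and `koha-rebuild-zebra` are available. The instance name `library` is a placeholder, and the `zebraqueue` column names are taken from the Koha schema; verify both against your setup.

```shell
# Sketch of both approaches, assuming a Debian-packaged Koha
# ("library" is a placeholder instance name).

# The SQL that "fills zebraqueue": queue every biblio for the indexer
# daemon (column names as in the Koha schema's zebraqueue table).
fill_zebraqueue_sql="INSERT INTO zebraqueue (biblio_auth_number, operation, server)
SELECT biblionumber, 'specialUpdate', 'biblioserver' FROM biblio;"

if command -v koha-rebuild-zebra >/dev/null; then
  # Option 1 (Joubu's suggestion): restore the dump, then force a full reindex
  zcat default.sql.gz | sudo koha-mysql library
  sudo koha-rebuild-zebra -f -v library

  # Option 2: instead, ship a populated zebraqueue in the dump so the
  # indexer daemon picks the records up on its own
  echo "$fill_zebraqueue_sql" | sudo koha-mysql library
fi
```

For option 2, appending that INSERT to the end of `default.sql.gz` would mean no manual step is needed after instance creation, at the cost of the indexer processing every row one batch at a time rather than one fast full rebuild.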
17:46 lavamind Joubu: thanks
17:43 Joubu lavamind: tcohen (Tomas), drojf (Mirko) and mtj (Mason) are the ones who know the most about this topic
17:40 lavamind Joubu: good idea, I'll put it on my TODO list
17:40 Joubu lavamind: you could start a discussion on koha-devel, at least to tell people you are willing to help and explain what you have in mind/how things can be enhanced
17:39 lavamind the puppet module is about 75% done (for a 1.0 release)
17:39 lavamind well, North America Eastern :P
17:39 lavamind (Eastern timezone)
17:38 lavamind I might be able to attend during $work hours
17:38 lavamind it would probably help if there were sprints organized to work on this
17:37 lavamind the difficult part is ensuring a smooth transition
17:37 lavamind I think with a little work it could be made policy compliant
17:36 lavamind eventually it would be best to upload these packages to the main archive
17:36 kidclamp cait++
17:36 lavamind but time is more or less lacking these days
17:35 lavamind Joubu: well I'm a DD :P
17:35 Joubu lavamind: if you have the time and knowledge, feel free to help. We clearly lack people on this side
17:28 lavamind I'm not a fan of the monolithic "one package for everything"
17:27 lavamind ideally there would be a separate package, e.g. koha-common-plack, which pulls in all that stuff
17:26 lavamind I think I may have run apt --no-install-recommends at some point
17:25 lavamind Joubu: well, recommends are installed by default
17:25 Joubu been*
17:25 Joubu lavamind: yes sure. But it's what my jessie install tells me too. No idea why it has not be reported before
17:23 lavamind otherwise, it should be documented
17:22 lavamind so if the goal is to make the package hard-depend on plack, it should be added as a Dependency
17:22 lavamind running 'aptitude why libcgi-compile-perl': i A libplack-perl Recommends libcgi-compile-perl
17:21 Joubu lavamind: ok, hopefully Mirko will take a look soon
17:20 lavamind Joubu: once I installed libcgi-compile-perl, I restarted plack and it started to work (intranet)
15:24 huginn` Bug 20920: normal, P5 - low, ---, koha-bugs, Needs Signoff, Plack timeout because of missing CGI::Compile Perl dependency
15:24 lavamind another trivial patch https://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20920
15:06 reiveune bye
14:56 ashimema and you caught me on one of my community days ;)
14:56 ashimema haha.. no worries.. they were nice easy ones to test ;)
14:55 Joubu Thanks ashimema for your quick signoffs!
13:54 tuxayo Those who use `delete_patrons.pl`, how do you blacklist borrower categories? (to avoid collateral damage on "strange" accounts) Do you call `delete_patrons.pl` multiple times with each `--category_code` that needs cleaning?
13:54 tuxayo Hi :)
13:48 ashimema brill
13:47 Joubu TableExists even
13:47 ashimema :)
13:47 Joubu there is a TableExist sub in updatedatabase.pl
13:47 ashimema thanks Joubu
13:47 ashimema okies..
13:47 ashimema or is that something the RMaints/RManagers are expected to do as part of the RM process
13:46 Joubu s/should/must
13:46 huginn` Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20271 major, P1 - high, ---, oha, Failed QA, Merge deleted* tables with their "alive" cousins
13:46 ashimema i.e. the update in bug 20271 should check for the existence of the deleted* tables before attempting to migrate and delete them
13:46 Joubu yes
13:46 ashimema tcohen.. am I right in thinking atomic updates should always attempt to be idempotent these days?
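The idempotency being discussed (check that a `deleted*` table still exists before migrating and dropping it, so re-running the update is a no-op) could be sketched as below. This is a hypothetical illustration in shell against `information_schema`; the real database update would be Perl in `updatedatabase.pl` using the `TableExists` helper mentioned above, and `koha_demo` is a placeholder database name.

```shell
# Hypothetical sketch of an idempotent migration step: act only if the
# deleted* table still exists, so a second run changes nothing.
# "koha_demo" is a placeholder; the real dbrev is Perl (TableExists).
table_exists() {
  # $1 = database, $2 = table; succeeds only if the table exists
  mysql -N -e "SELECT COUNT(*) FROM information_schema.tables
               WHERE table_schema='$1' AND table_name='$2'" | grep -q '^[1-9]'
}

if table_exists koha_demo deletedbiblio; then
  echo "deletedbiblio still present: migrate its rows, then drop it"
fi
```

Guarding every destructive step this way is what lets RMaints backport or replay database updates without them exploding on databases that already ran them.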
13:40 LeeJ even though I technically never left :)
13:40 LeeJ hi #koha
13:40 * LeeJ waves
13:39 ashimema indeed
13:35 tcohen it needs to happen at some point anyway, better sooner than later
13:25 ashimema I'm so going to regret passing QA on it soon enough though, aren't I.. bet it's going to lead to all sorts of backporting fun this cycle ;)
13:25 ashimema I'm just giving it the second eye ;)
13:25 ashimema marcelr did all the hard work really.. ;)
13:21 tcohen ashimema: great job!
13:06 ashimema with that very minor comment I think that bug is pretty much good to go.. I'm just running it through the test suite a few times
13:01 ashimema but the pref description needs updating at the very least to not reference deletedbiblio as a table ;)
13:00 ashimema I agree with keeping it..
12:58 tcohen ashimema: OAI-PMH repositories need to specify how they deal with deleted records, which yields different harvesting strategies. I think we should keep it configurable to keep the current behaviour. If that's what you're asking
12:55 ashimema as I've not dabbled there at all I don't feel qualified to comment on what the syspref should be changed to say (or whether it is indeed still relevant with the deletedbiblio table going away 😉)
12:54 huginn` Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=20271 major, P1 - high, ---, oha, Failed QA, Merge deleted* tables with their "alive" cousins
12:54 ashimema I was going to ask them to take a look at bug 20271 from the OAI perspective ;)
12:54 ashimema haha.. wasn't a question..
12:53 Joubu ashimema: shoot the question and you will see
12:53 Joubu hi
12:49 ashimema who's the best person to ask about the OAI-PMH code these days?
12:10 Hugo any idea what it could be?
12:10 Hugo I have this error at zebra: zebrasrv(1617) [log] dict_lookup_grep: (\x01}\x01\x04)\x01\x14\"\x01\x1C\x01\x1A\x01\x12\x01\x01\x01\x09\x01\x01\x01\x09
12:10 Hugo morning
10:04 barneyjobs hello?
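For context on tcohen's point about deleted records: an OAI-PMH repository declares its deleted-record policy in its Identify response, and harvesters pick their strategy from that value. A trimmed fragment (the element and its three allowed values come from the OAI-PMH 2.0 specification; a real Identify response carries several other required elements):

```xml
<Identify>
  <!-- one of: no | transient | persistent -->
  <deletedRecord>persistent</deletedRecord>
</Identify>
```

With "persistent", harvesters can rely on deletion notices indefinitely, which is why removing the `deletedbiblio` table affects what Koha can legitimately advertise here.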
10:03 barneyjobs hello po
09:52 alex_a mmh
09:52 kidclamp yeah, go ahead, I can also take a look if you want
09:52 kidclamp I think you mean 20073? 18235 was never pushed
09:52 alex_a Or I can try a rebase (if you are ok)
09:51 alex_a What is the plan? Waiting for 18235 to be pushed again?
09:51 huginn` Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=18213 enhancement, P5 - low, ---, nick, Signed Off, Add language facets to Elasticsearch
09:51 alex_a kidclamp, bug 18213 does not apply. Probably because 18235 has been reverted
09:50 alex_a \o
09:50 * kidclamp waves
09:50 alex_a kidclamp, around?
07:44 ashimema good morning #koha
07:27 gaetan_B hello
06:29 reiveune hello
06:26 alex_a hello
05:53 caroline NZ?
05:53 caroline What is the active time zone right now?
05:53 caroline is anybody around?
05:52 caroline hello #koha!
05:43 fridolin hi there