Time  Nick         Message
00:00 jcamins      Useful might be over-selling it.
00:00 jcamins      "Useful" implies "would be listened to."
00:13 rangi        :)
00:47 wizzyrea     The fox is <reply> http://www.youtube.com/watch?v=jofNR_WkoCE
00:48 eythian      wahanui: stonehenge is <reply>http://youtu.be/mbyzgeee2mg
00:48 wahanui      OK, eythian.
00:48 eythian      I think that one's better anyway
00:49 wizzyrea     the fox is relevant because, well. Kids.
00:50 wizzyrea     that is completely silly
00:54 mtompset     Greetings, #koha.
01:10 rambutan     hi mtompset
01:26 jcamins      Hey, does anyone have Koha + MySQL on a test server to do a test for me real quick?
01:26 wizzyrea     sure, sup
01:26 jcamins      I'm trying to confirm that deleting a biblio from the staff client updates the timestamp to the date deleted in deletedbiblio.
01:27 jcamins      The test plan is as follows: create biblio, note biblio.timestamp from MySQL client. Delete biblio using "Delete record" button in staff client. Check deletedbiblio.timestamp for that biblionumber.
01:27 wizzyrea     yep sec
01:29 * dcook      thinks that works these days but didn't back in the day
01:29 * dcook      has a bunch of bad deletedbiblio timestamps from 3.2
01:30 dcook        I haven't tested it right now though, so I'd wait for liz on this one..
01:30 wizzyrea     2013-09-30 14:30:16 < record I just deleted timestamp from deletedbiblio
01:31 jcamins      Woohoo!
01:31 jcamins      Thanks.
01:31 wizzyrea     that was edit -> delete items, then edit -> delete record
01:31 wizzyrea     that is what you meant right?
01:32 jcamins      Yup.
01:32 wizzyrea     cool
01:32 jcamins      It'd make me happy if I could run a report every day for all records touched in the last day.
01:33 rambutan     jcamins: that would be useful information?
01:34 wizzyrea     yep, I just did it again, timestamp looks just fine in deletedbiblio
01:34 jcamins      Exceedingly useful.
01:34 jcamins      It would allow for keeping a discovery layer synchronized without the trouble of setting up an actual push notification for changes.
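
A minimal sketch of the daily "what changed" report jcamins describes, assuming the stock biblio and deletedbiblio tables and their timestamp columns (the same columns wizzyrea just checked above). This is not an official Koha report, just one plausible way to pull both lists:

    use strict;
    use warnings;
    use C4::Context;

    my $dbh = C4::Context->dbh;

    # Biblios added or modified in the last 24 hours
    my $changed = $dbh->selectall_arrayref(
        q{SELECT biblionumber, timestamp FROM biblio
          WHERE timestamp >= NOW() - INTERVAL 1 DAY}
    );

    # Biblios deleted in the last 24 hours (the delete sets the timestamp,
    # as confirmed in the test above)
    my $deleted = $dbh->selectall_arrayref(
        q{SELECT biblionumber, timestamp FROM deletedbiblio
          WHERE timestamp >= NOW() - INTERVAL 1 DAY}
    );

    printf "changed: %d, deleted: %d\n", scalar @$changed, scalar @$deleted;
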
01:37 dcook        You could use OAI :p
01:37 rambutan     well, hummm
01:37 dcook        That might not fit your purposes though
01:39 jcamins      dcook: I may be mistaken, but my understanding of OAI is that it doesn't really allow for incremental updating.
01:40 dcook        It depends on how you implement it
01:40 dcook        The spec has selective harvesting built-in
01:40 dcook        It just depends on how often you run  your harvests and if you update your "from" date with each harvest
01:40 dcook        And whether your repository honours that "from" date properly
01:40 dcook        At the moment, Koha doesn't
01:40 dcook        brb
01:42 jcamins      dcook: nor does Koha report deleted records, making the use of OAI with Koha exceedingly problematic at best.
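
For reference, the incremental harvest loop dcook describes looks roughly like this: remember when you last harvested and pass that as "from" on the next run. The base URL and state file below are made up, the metadataPrefix is assumed, and resumptionToken paging is left out:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use URI;

    my $base  = 'http://koha.example.org/cgi-bin/koha/oai.pl';   # hypothetical
    my $state = '/var/tmp/last-oai-harvest';                     # hypothetical

    # Default starting point, overridden by the date saved on the last run
    my $from = '2013-09-29';
    if ( open my $fh, '<', $state ) { chomp( $from = <$fh> ); }

    my $uri = URI->new($base);
    $uri->query_form(
        verb           => 'ListRecords',
        metadataPrefix => 'marcxml',   # assumed prefix
        from           => $from,       # only useful if the server honours it
    );

    my $resp = LWP::UserAgent->new->get($uri);
    die $resp->status_line unless $resp->is_success;
    # ... hand $resp->content to an XML parser / the discovery layer here ...

    # Save "today" so the next run only asks for changes since this harvest
    my @t = gmtime;
    open my $out, '>', $state or die $!;
    printf {$out} "%04d-%02d-%02d\n", $t[5] + 1900, $t[4] + 1, $t[3];
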
01:47 dcook        I think there's a bug for reporting deleted records at some point :p
01:49 dcook        bug 3206
01:49 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=3206 enhancement, P5 - low, ---, gmcharlt, NEW , OAI repositry deleted record support
01:49 dcook        I guess it's not an active one though :p
01:50 jcamins      Doesn't seem to be.
01:50 dcook        I might work on it someday, depending on how that union catalogue idea pans out
01:51 dcook        bug 10824 is actually just awaiting sign off
01:51 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10824 major, P5 - low, ---, dcook, Needs Signoff , OAI-PMH repository/server not handling time in "from" argument
01:51 dcook        I should actually revise that to say "from" and "until" parameters...
02:10 dcook        I am having an issue where I get a software error after harvesting 34,000 records from one Koha instance...
02:10 dcook        I imagine 680 HTTP requests might upset any server...
02:11 dcook        Although there are a few seconds between each request, me thinks
02:39 mtj          peeps - shall i log a bug to delete the greybox/GreyBox_v5_5.zip file(s) in Koha?
02:40 mtj          ...they are unneeded yes? (and duplicated per lang and theme?)
02:40 * dcook      has no idea about Greybox
02:41 rangi        i cant see why we would need a zip file
02:48 mtompset     Hmmm....
02:56 * dcook      ponders
02:56 dcook        OAI-PMH repositories are supposed to use UTC timestamps
02:56 dcook        Koha does not...
02:57 dcook        Nor does Koha acting as an OAI server..
03:00 eythian      bgkriegel: should bug 9611 have its status changed, or are you still working on it?
03:00 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=9611 enhancement, P5 - low, ---, srdjan, Needs Signoff , Changing the password hashing algorithm from MD5 to more secure Bcrypt
03:01 bgkriegel    in a minit
03:01 bgkriegel    i'm writing my last comment
03:01 wizzyrea     your last comment EVAR?! NOOOOOO
03:01 bgkriegel    hehe
03:02 bgkriegel    :)
03:10 bgkriegel    done ! :)
03:13 mtompset     EVAR?
03:13 mtj          a random pkg question… when installing a new koha package - where are the db updates logged to?
03:15 wizzyrea     /var/log/apt/term.log
03:15 wizzyrea     (at least it was this morning when I looked at it)
03:16 mtj          hmm, thats blank for me
03:19 mtj          perhaps that  DB-upgrade info only gets sent to the screen, and not a file - i might be imagining it?
03:20 wizzyrea     hm it was in the one I looked at this morning
03:20 wizzyrea     maybe it's different per apt config
03:20 wizzyrea     you installed the package using apt-get eh?
03:23 mtj          aah, i just noticed my 'version' was higher than my package
03:23 mtj          (prolly testing a master build before)
03:24 wizzyrea     oh so did you install it with dpkg?
03:24 mtj          yeah, sorry  - dpkg -i ./foo.deb
03:24 wizzyrea     ah okies :) then it won't be in the apt logs. :)
03:24 wizzyrea     but it might be in /var/log/dpkg.log
03:25 mtj          yeah, there was indeed some info in there
03:29 mtj          cool, it was my high version number causing the problem :)
03:41 bgkriegel    bye
04:04 mtj          hmm, i've just spotted a subtle bug in updatedb.pl - for the version checking code…
04:04 mtj          if ( C4::Context->preference("Version") < TransformToNum($DBversion) ) {
04:04 mtj          seems to work better than...
04:05 mtj          if ( CheckVersion($DBversion) ) {
04:05 mtj          .
04:05 eythian      I just found a situation my csvtomarc script can't handle. Embedded newlines in a title...
04:06 eythian      mtj: in that case I guess CheckVersion should be fixed.
04:06 mtj          wow, i cant blame you for not expecting that eythian
04:06 eythian      liberty is not the most sane software out there.
04:13 mtj          the mentioned bug seems to be that CheckVersion() skips updatedb blocks with $VERSION values that are more than 3dp
04:14 mtj          so, for these 3 updates - the middle one gets skipped...
04:14 mtj          3.12.05.000
04:14 mtj          3.12.05.0001
04:14 mtj          3.12.05.001
04:15 mtj          ...which is a gotcha if you are attempting to use 'local' version numbers, for your updates
04:15 cait         eythian++
04:15 eythian      mtj: 3.12.05.0001 == 3.12.05.001
04:16 eythian      in version number arithmetic
04:16 eythian      the dots aren't decimal places, they're simply breaking up sections, and it's an integer in each section.
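
In other words, sections compare as integers, not as decimal fractions. A tiny sketch of that semantics (nothing to do with Koha's own updatedatabase code, just an illustration of eythian's point):

    use strict;
    use warnings;

    # Compare two dotted version strings section by section,
    # treating each section as an integer.
    sub cmp_versions {
        my ( $left, $right ) = @_;
        my @l = split /\./, $left;
        my @r = split /\./, $right;
        for my $i ( 0 .. ( $#l > $#r ? $#l : $#r ) ) {
            my $c = ( $l[$i] // 0 ) <=> ( $r[$i] // 0 );
            return $c if $c;
        }
        return 0;
    }

    print cmp_versions( '3.12.05.0001', '3.12.05.001' ), "\n";   # 0 -- equal
    print cmp_versions( '3.12.05.001',  '3.12.05.002' ), "\n";   # -1
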
04:20 mtj          hmm, so whats a better string to give the 2nd  'local' update?
04:21 eythian      are you supposed to add another section or something?
04:21 eythian      well hang on
04:21 eythian      you have another section
04:21 eythian      so 3.12.05.002
04:22 eythian      ah no, that's internal
04:22 eythian      I think you can just chuck another section on.
04:22 eythian      or increment the already existing minor number, but that's probably less ideal.
04:24 mtj          yeah, well i thought i *was* adding another section on :)
04:24 eythian      you need a . for that :)
04:25 mtj          well, i thought the dots were simply ignored by koha (as you suggested?)
04:26 mtj          ahh, actually you didnt quite say that before… did you :)
04:26 eythian      nope
04:29 mtj          fwiw - i did initially experiment with an extra '.'  but got some weird error, so assumed i was doing it wrong :/
04:29 eythian      I'm not sure if it's right, but it's how it should be done.
04:29 eythian      oh, hi cait.
04:30 mtj          morning cait :)
04:43 dcook        15 minutes until The Signal!
04:43 * dcook      also waves to cait
04:44 mtj          thanks for the info eythian - i'll poke about some more re: version numbers
04:45 eythian      cool. let me know what you work out, too
04:45 dcook        Oh noes...it's Monday...
04:45 eythian      not for too much longer though
04:45 * dcook      would also be curious as to what you find, mtj
04:46 dcook        Well, still Sunday in Vancouver..barely Monday in Toronto..
04:46 eythian      so behind the times.
04:46 eythian      wahanui: new zealand
04:46 wahanui      http://www.buzzfeed.com/daves4/hardest-parts-about-living-in-new-zealand
04:47 eythian      wahanui: new zealand
04:47 wahanui      http://www.buzzfeed.com/daves4/hardest-parts-about-living-in-new-zealand
04:47 eythian      wahanui: new zealand is also in the future. We have jetpacks.
04:47 wahanui      okay, eythian.
04:48 dcook        hehe
04:49 dcook        I should probably listen to local radio but...I like the host and the show in Canada :/
04:54 dcook        argh..
04:59 dcook        wtf Koha...
04:59 wahanui      hmmm... koha is a free software ils see http://koha-community.org for more info
05:02 * dcook      wonders if anyone uses Koha as an OAI-PMH server, because if they do...it's not working anywhere near correctly
05:03 dcook        Or maybe every one of my patches fixes one thing and breaks another..
05:05 cait         dcook: we had someone testing it for feeding their discovery interface
05:05 cait         they noted some problems about it
05:05 cait         the big problem was that there was no option to include item information
05:05 cait         but they were also not so happy about the missing deleted records
05:05 dcook        Those are optional things though
05:05 cait         and there was something about paging giving you an error when there were no more records
05:05 dcook        These are fundamental core problems
05:05 dcook        Yep
05:06 dcook        Actually, no, I haven't seen that
05:06 dcook        The resumption tokens don't appear to work when using dates with times though
05:07 dcook        Including item information would be easy
05:07 dcook        Showing deleted records wouldn't be too hard either
05:08 cait         can only tell you the problems we ran into
05:08 dcook        Sorry
05:08 dcook        I appreciate you telling me that :)
05:08 * dcook      is just unreasonably frustrated and didn't mean to direct his frustration your way
05:09 cait         it's ok ;)
05:12 dcook        I guess I'll fix the resumption tokens...
05:13 * dcook      feels like he's slowly becoming the oai guru...
05:13 mtj          hmm, there is a bug around the version number stuff -  its not just me being stupid
05:13 dcook        Maybe because no one else cares about oai :p
05:19 dcook        Now that I look at everything again...I think I finally understand...
05:19 dcook        Wait...no, it still wouldn't have worked...
05:20 dcook        Maybe bug 10824 isn't relevant if we consider Koha to just handle a granularity of YYYY-MM-DD...
05:20 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10824 major, P5 - low, ---, dcook, Needs Signoff , OAI-PMH repository/server not handling time in "from" argument
05:20 dcook        But the resumption token thing is still busted..
05:24 dcook        huzzah. Changing the colons to slashes fixes the resumption token issue..
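
For context, the failure mode dcook just fixed looks roughly like this: if the token's parts are joined with colons, a "from" value that carries a time splits into extra pieces. This is an illustration only, not Koha's actual token code:

    use strict;
    use warnings;

    # prefix : offset : from  -- fine while "from" is a bare date...
    my $token = join ':', 'marcxml', 50, '2013-09-30T12:00:00Z';
    my @parts = split /:/, $token;
    print scalar(@parts), "\n";   # 5, not 3: the time's colons split the date

    # ...whereas a separator that can't appear in the timestamp survives
    my $safe = join '/', 'marcxml', 50, '2013-09-30T12:00:00Z';
    my @ok   = split m{/}, $safe;
    print scalar(@ok), "\n";      # 3
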
05:26 cait         mtj: but local database updates are evil...;)
05:26 cait         dcook: yay!
05:30 mtj          sure, but also unavoidable cait
05:30 cait         hm it depends
05:31 * dcook      tries to avoid local database changes these days
05:31 cait         we have been able to avoid it so far - but we are very cautious about local changes
05:31 dcook        Most of the time these days, I tell people that I'm happy to make changes to the database, but only after they've been vetted and pushed by the community first
05:32 dcook        Not always possible though
05:34 cait         argh.
05:34 cait         i really have to clean up my addressbook
05:35 cait         [off] me still has liblime addresses in there for some people... email.
05:37 cait         [off] all gone... i hope
05:39 mtj          so, are any koha devs/companies actually using some system for 'local' version numbers for DB updates?
05:40 cait         we don't
05:40 cait         and i have to run and import patron data :)
05:40 cait         bbl
05:41 dcook        mtj: Maybe catalyst?
05:42 dcook        We track our db changes separately
05:49 mtj          theres a bug in the TransformToNum() sub, that stops 'local' versions from being evaluated correctly
05:50 mtj          # remove the 3 last . to have a Perl number
05:50 mtj          $version =~ s/(.*\..*)\.(.*)\.(.*)/$1$2$3/;
05:51 mtj          ….and its been about since version 3.00.00 :p
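
For what it's worth, applying that substitution on its own to the three strings from the earlier example gives this (just the regex run standalone, not the rest of updatedatabase.pl):

    use strict;
    use warnings;

    for my $version ( '3.12.05.000', '3.12.05.0001', '3.12.05.001' ) {
        ( my $num = $version ) =~ s/(.*\..*)\.(.*)\.(.*)/$1$2$3/;
        print "$version -> $num\n";
    }
    # 3.12.05.000  -> 3.1205000
    # 3.12.05.0001 -> 3.12050001  (numerically *smaller* than 3.1205001)
    # 3.12.05.001  -> 3.1205001

So a four-digit "local" section gets folded into the decimal fraction and compares lower than its three-digit neighbour, which fits the gotcha described above.
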
05:53 dcook        :/
05:55 dcook        bug 10974
05:55 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10974 normal, P5 - low, ---, dcook, Needs Signoff , OAI-PMH Resumption Tokens Do Not Handle Time
05:55 dcook        If anyone wants a super easy sign off...
06:02 * mtompset   grumbles about reindexing on 512MB of memory with no swap space and 53K+ records failing because of some malformed tag, and requiring the use of a script that inserts everything into the zebraqueue.
06:05 dcook        Doesn't sound like fun, mtompset
06:05 dcook        Although you'll be glad when you've cleaned up the marcxml :)
06:05 mtompset     It's not.
06:05 mtompset     Here's the thing.
06:06 mtompset     It all indexes fine running this script.
06:06 mtompset     It does not reindex nicely otherwise.
06:06 mtompset     I should, at some point in time, try to determine the exact record it is barfing on.
06:06 * dcook      nods
06:07 dcook        Especially if you're using the XSLTs
06:07 dcook        I've had sneaky bad bibs get in and it's not pretty
06:26 * magnuse    waves
06:29 dcook        hey ya magnuse
06:31 magnuse      hiya dcook
06:34 Oak          magnuse
06:34 Oak          kia ora #koha
06:34 dcook        hey Oak
06:37 magnuse      Oak
06:37 * magnuse    is too slow
06:37 Oak          :)
06:37 Oak          hey dcook
06:41 marcelr      hi #koha
06:41 marcelr      kf++ #qa
06:42 Oak          kf++
06:42 magnuse      kf++
06:44 * dcook      succumbs to peer pressure
06:44 dcook        kf++
06:45 magnuse      hehe
06:46 marcelr      hi magnuse dcook and oak
06:46 dcook        hey ya marcelr :)
06:47 magnuse      hiya marcelr
06:48 Oak          Ahoy marcelr
06:48 gaetan_B     hello
06:48 wahanui      hi, gaetan_B
06:48 Oak          \o
06:57 magnuse      tee hee http://www.nerdmeritbadges.com/
06:57 magnuse      @wunder boo
06:57 huginn       magnuse: The current temperature in Bodo, Norway is 8.0°C (8:50 AM CEST on September 30, 2013). Conditions: Mostly Cloudy. Humidity: 66%. Dew Point: 2.0°C. Windchill: 4.0°C. Pressure: 30.12 in 1020 hPa (Steady).
06:57 magnuse      @wunder marseille
06:57 huginn       magnuse: The current temperature in Marseille, France is 16.0°C (8:30 AM CEST on September 30, 2013). Conditions: Fog. Humidity: 94%. Dew Point: 15.0°C. Pressure: 29.65 in 1004 hPa (Steady).
06:58 j_wright     what's the state of Pg support in Koha?
06:58 magnuse      j_wright: there is none
06:58 rangi        not entirely true
06:59 magnuse      oops, listen to rangi, not me
06:59 rangi        with bug 8798
06:59 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=8798 enhancement, P3, ---, elliott, In Discussion , Add the use of DBIx::Class
06:59 rangi        you can deploy to postgresql, and it mostly works
07:00 rangi        once thats pushed we can clean out the last of the mysqlisms, there are very few left
07:00 dcook        \o/
07:00 rangi        it currently works fine on mariadb too
07:00 magnuse      \o/
07:00 rangi        pg support is nice, but its not a massive priority
07:01 j_wright     well clearing out the mysqlisms would enable more than Pg support
07:04 rangi        yep, there arent many
07:04 rangi        but again, its not a massive priority
07:04 * dcook      wonders what the massive priorities are
07:05 dcook        Not that I disagree. I'm just curious :p
07:05 magnuse      plack
07:05 magnuse      dbic
07:05 magnuse      stable master
07:05 rangi        yep
07:05 magnuse      is what i'd say ;-)
07:06 rangi        id agree with that
07:06 magnuse      yay! :-)
07:06 j_wright     would be sad to be using DBIC and still be Mysql-bound
07:07 rangi        we are transitioning to maria
07:08 rangi        if we get pg support as an aside, all well and good, but its not one of my priorities
07:08 rangi        because for the library user, it gains them nothing
07:08 reiveune     hello
07:08 rangi        and that should be our priority
07:08 dcook        ^^
07:08 dcook        rangi++
07:08 samueld      hi everybody
07:08 dcook        hey samueld
07:08 magnuse      bonjour samueld and reiveune
07:09 reiveune     \o/
07:09 rangi        hi samueld and reiveune
07:09 samueld      hi magnuse
07:10 j_wright     pity mariadb doesn't work optimally unless you have ucontext
07:10 samueld      is it normal that when i create a categorycode in koha, it is written automatically in upper case instead of lower case?
07:12 rangi        yes, its supposed to be samueld
07:13 magnuse      there is some js that transforms it to uppercase automagically, i think
07:13 samueld      for example, i write "employee" and it is transformed into "EMPLOYEE"; that's why i have some problems connecting koha to the ldap
07:14 magnuse      ouch
07:15 magnuse      yeah, i was just thinking a couple days ago that forcing uppercase seemed a little heavy handed
07:15 samueld      is it a bug?
07:17 dcook        Time for me to run. Night/day all.
07:17 magnuse      samueld: i would think the js was implemented as a feature, not by accident ;-)
07:17 samueld      ok, so, it can be changed
07:41 magnuse      cait++ bgkriegel++ for bug 10965 and bug 10969
07:41 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10965 major, P5 - low, ---, bgkriegel, Passed QA , Sample itemtypes can't load on new install
07:41 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10969 normal, P5 - low, ---, katrin.fischer, Signed Off , Fix sample itemtypes for translated installers
07:41 magnuse      mysqlf-- for breaking sample item types
07:41 magnuse      gah
07:41 magnuse      myself--
07:44 marcelr      interesting typo, magnuse: no fan of mysql ?
07:44 marcelr      )
07:44 marcelr      :)
07:44 magnuse      :-)
07:45 magnuse      mysqld is not too bad, but i positively hate mysqlf ;-)
11:07 paxed        any dev got some time to help me resolve a problem? after install, can't go to maintenance.pl because i get "Can't use an undefined value as an ARRAY reference at /usr/local/lib/perl/5.14.2/DBI.pm line 2058."
11:19 paxed        ergh. right, forgot port 8080, derp.
11:20 paxed        that should really complain about something else if the db is "missing" ...
12:17 * magnuse    waves again
12:20 oleonard     Hi #koha
12:20 magnuse      kia ora oleonard
12:25 magnuse      ooh, shiny new buttons on the cataloguing page!
12:37 magnuse      is it just me, or is marcflavour missing from the new, alphabetically sorted sysprefs.sql?
12:38 marcelr      magnuse: isn't that a specific choice in webinstaller?
12:39 marcelr      it should not be in that file
12:39 magnuse      ah yes, that could be it
12:40 magnuse      thanks marcelr
12:42 magnuse      but at least marcflavour will never be "usmarc", right?
12:42 marcelr      marc21
12:44 marcelr      uppercase
12:44 marcelr      normarc is one of the big three :)
12:46 magnuse      w00t!
12:46 marcelr      holmarc never made it ;)
12:47 magnuse      aww...
12:47 magnuse      Bug 10975
12:47 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10975 normal, P5 - low, ---, gmcharlt, NEW , addbiblio.pl checks if marcflavour equals "usmarc"
12:50 marcelr      somewhere a title is not filled in the template
12:50 marcelr      good catch
12:50 marcelr      apparently, it does not hurt much
12:51 magnuse      been like that since 2007, i think :-)
12:54 marcelr      magnuse: the situation seems to be corrected by a later line: $template->param( title => $record->title() ) if ( $record ne "-1" );
12:54 marcelr      in the code further below the first line
12:55 marcelr      without the usmarc check
12:55 kivilahtio__ Hi I was migrating items to Koha and realized that the itemnumber was not preserved. I used C4::Items::_koha_new_item() to migrate items. Would it be acceptable if I extended that function to push the itemnumber to DB as well if it is present?
12:55 kivilahtio__ just installing my git environment and figuring stuff out
12:56 marcelr      magnuse: no sorry there is an exit in between
12:56 kivilahtio__ or maybe there is a more stylish alternative for migrating items using the Koha API?
12:56 marcelr      so in a specific case it should not appear
12:57 magnuse      kivilahtio__: why would you need or want to preserve the itemnumbers?
12:57 kivilahtio__ magnuse: so we can target our circulations and holds to our items
12:58 kivilahtio__ and fines
12:58 kivilahtio__ we migrate it all
12:59 magnuse      kivilahtio__: use barcodes for the matching instead?
12:59 kivilahtio__ magnuse: our barcodes are not 100% reliable
12:59 magnuse      marcelr: it's all yours if you want to fix it ;-)
12:59 kivilahtio__ magnuse: but that is a good idea
13:00 marcelr      magnuse: after you please..  i have so much to fix already :)
13:00 magnuse      marcelr: me too :-/
13:07 kf           hm
13:08 kf           not sure if the webinstaller creates the syspref or only updates it
13:08 marcelr      hi kf
13:09 marcelr      $installer->set_marcflavour_syspref($marcflavour); ?
13:09 kf           guess you checked :)
13:09 kf           sorry was afk until now
13:09 kf           and hi marcelr
13:10 marcelr      my $request =      $self->{'dbh'}->prepare(          "INSERT IGNORE INTO `systempreferences` (variable,value,explanation,options,type) VALUES('marcflavour','$marc_cleaned','Define global MARC flavor (MARC21, UNIMARC or NORMARC) used for character encoding','MARC21|UNIMARC|NORMARC','Choice');"        );
13:11 marcelr      kf: i did now
13:11 marcelr      code from Installer.pm
13:11 marcelr      so that is fine
13:14 kf           yep
13:15 magnuse      yay!
13:15 magnuse      in other news, i would hesitate to call addbiblio.pl "fine"...
13:16 marcelr      fine finer finest
13:21 marcelr      cya
13:30 mtompset     12 hours reindexing... and still not done. Oh the pain!
13:34 smeagol      Hi all, does anyone know if localuse syspref counts items that are checked in to koha to trigger a HOLD to waiting status? thanks, as always
13:51 Barrc        Hey all - are odd Koha version numbers still considered unstable/dev releases? I ask cause I just ran a package install and then koha-install-log says "KOHA_INSTALLED_VERSION=3.12.05.000"
13:53 jcamins      Barrc: the second number is the one that marks stable/development.
13:53 Barrc        Of course it does........half asleep here, thanks!
14:26 banana       wizzyrea++ -> "we would only include other Open Source projects"
14:31 kivilahtio__ hey I cannot duplicate the field 001 in "Administration" -> "Koha to Marc mapping". I need to explicitly define the field 001 to biblio.biblionumber, biblioitem.biblionumber and biblioitem.biblioitemnumber but I can apparently only assign it once
14:31 kivilahtio__ I would like to preserve our old database id sitting in $001
14:33 jcamins      kivilahtio__: you should not be mapping biblioitem.biblionumber, and biblioitemnumber != biblioitem.biblionumber
14:34 kivilahtio__ jcamins: ERROR: Adding biblio failed: No biblionumber tag for framework ""
14:35 jcamins      Personally, I wouldn't do that.
14:35 kivilahtio__ jcamins: hmm, I thought I understood that much from reading the bulkmarcimport.pl code
14:35 jcamins      You're also going to have to use a custom Zebra configuration.
14:35 kivilahtio__ default id is in $999, but we have it in $001
14:35 kivilahtio__ jcamins: wow I don't want to do that :D
14:36 jcamins      That's what I thought. That's why I leave biblionumber in 999$c and biblioitemnumber in 999$d.
14:36 kivilahtio__ so if I want to keep our existing database ids I just need to suck it up and migrate them in the $999?
14:36 rambuten     jcamins: can you kick rambutan for me?
14:36 jcamins      You can try that. I'm actually not sure how well it will work.
14:36 jcamins      Done.
14:37 rambuten     strangely it won't let me assume that nick now
14:37 kivilahtio__ rambuten: maybe it is still reserved?
14:37 kivilahtio__ rambuten: maybe wait a bit?
14:37 jcamins      rambuten: rambutan may not have left the network yet.
14:37 jcamins      rambuten: if it's registered, you can use /ghost rambutan, I think.
14:37 rambuten     somebody has copyrighted it, but they never seem to be online
14:37 jcamins      Oh.
14:38 rambuten     I never have problems getting the nick, but it's no big deal
14:40 rambuten     jcamins: you didn't kick ban, did you?
14:40 kf           kivilahtio__: messing with the ids will bring you all kinds of headache later
14:40 jcamins      rambuten: nope.
14:41 kf           kivilahtio__: we never preserve old ids in migrations, but our 001 are unique from a union catalog. We only store the ids from former systems in some fields so looking them up would be possible
14:43 kivilahtio__ kf: lol I cant set my mappings back to the defaults! Ok I'll try the $999 route
14:43 kivilahtio__ but this might be really hard when merging biblios and items
14:44 kivilahtio__ I just save my new mappings but nothing happens...
14:44 jcamins      kivilahtio__: how often do you merge bibs and items?
14:45 kivilahtio__ jcamins: well it takes time to rewrite the scripts. I was hoping to reuse some Koha parts which use the biblionumber.
14:45 kivilahtio__ jcamins: ok I just realized my logical mistake
14:47 kivilahtio__ bah ui misunderstanding
15:12 rambuten     didn't the old koha-community have a search field on the main page
15:12 rambuten     ?
15:12 jcamins      I think so, yeah.
15:14 rambuten     kinda annoying to have to do a google search to find something on a given site.
15:14 jcamins      We could probably add it back below "Community Resources."
15:15 rambuten     apparently no presentation schedule yet?
15:16 jcamins      For the conference?
15:16 jcamins      There is.
15:17 rambuten     it's not exactly jumping out at me on any of the pages I see
15:17 jcamins      Yeah, the KohaCon13 pages are kind of unusable. There's one somewhere, though.
15:18 rambuten     http://wiki.koha-community.org/wiki/KohaCon13_Program
15:18 jcamins      That's the one.
15:19 kf           hackfest?
15:19 wahanui      hackfest is awesome! Group motivated koha hacking is the best koha hacking :)
15:19 kf           I will try to send out an email about the hackfest planning tomorrow or so I think
15:19 kf           hackfest13 is http://wiki.koha-community.org/wiki/Kohacon13/Hackfest
15:19 kf           hackfest13?
15:19 wahanui      hackfest13 is http://wiki.koha-community.org/wiki/Kohacon13/Hackfest
15:20 kf           jcamins: i think oyu are around the first day of the hackfest?
15:20 jcamins      I am, yes.
15:20 kf           :)
15:20 kf           you could talk about the awesome query parser
15:21 kf           or...
15:21 rambuten     I'm gratified to see that coffee break sessions are well represented. I wonder if I could offer to bring/have purchased some better-than-your-average-bear coffee, like Eight O' Clock
15:31 jeff         jcamins: Would you consider yourself a good person whose brain I could pick about Koha and Plack?
15:31 kivilahtio__ jeff :)
15:31 jeff         jcamins: for starters, the "coding for plack" idea on the hackfest wiki page has me wondering what the current state of plack is in koha?
15:32 jeff         kivilahtio__: greetings!
15:35 jcamins      jeff: probably one of the best you're going to get, anyway.
15:35 rambuten     no speed talks @kohacon13 ?
15:36 jcamins      jeff: most things work under Plack.
15:36 jeff         jcamins: http://wiki.koha-community.org/wiki/Plack and the referenced bug 7172 both seem somewhat dormant. Is that because the work's done, or because there hasn't been much progress to note lately?
15:36 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=7172 major, P1 - high, ---, paul.poulain, NEW , Omnibus for Plack variable scoping problems
15:37 jcamins      At this point I think we've addressed all the variable scoping issues, mostly with terrible bandaids.
15:39 jcamins      The only things that absolutely don't work under Plack are database export, and the progress bar for importing records.
15:41 jcamins      However, Koha under Plack is still a bit brittle.
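
A miniature of the scoping trap being referred to: under Plack::App::CGIBin a CGI script is compiled once into a long-lived sub, so a file-level lexical used inside a named sub can keep its value from the first request instead of being re-initialised on every hit. Hypothetical script, not actual Koha code:

    use strict;
    use warnings;

    my $query = 'borrower=42';   # set once per *process* under a persistent server

    sub do_search {
        # Under plain CGI this always sees a freshly assigned $query; once the
        # whole script becomes one long-lived sub, it can keep seeing the value
        # from the first request ("variable will not stay shared") -- hence the
        # per-script bandaids: package variables, passing arguments explicitly, etc.
        return "searching with $query";
    }

    print do_search(), "\n";
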
15:46 jeff         jcamins: I understand that Plack has support for running (most, with some assumptions/fixes) CGI-style scripts under PSGI using Plack::App::CGIBin. Are you aware of anything that does the reverse -- takes a script that speaks PSGI and allows it to run under a CGI environment?
15:46 jeff         jcamins: That last one isn't Koha-specific at all, sorry. :-)
15:49 jcamins      jeff: I am not.
15:50 kf           bye all
15:50 jeff         Thanks. It was worth a shot. I had this dream of writing a script that talks PSGI and then in the README says "if you must run this under a CGI environment, do XYZ"
15:51 jcamins      There may be something.
15:51 jcamins      I know very little about native-PSGI applications.
15:52 kivilahtio__ hey there is something wrong with the bulkmarcimport.pl script. There seems to be no way of pushing the old biblio id to the biblionumber! The marc mappings don't do anything since the database accessor functions don't even touch the biblionumber column in C4::Biblio::_koha_add_biblio()!
15:53 kivilahtio__ biblionumber is picked up fine until that function, where nothing is done with the biblionumber? I mean there should be a way to map the biblionumber?
15:54 kivilahtio__ anyway, I'm going home
15:54 kivilahtio__ The issue is that the biblionumber is correctly passed to the database accessor function but it is not stored in the DB; a new one is generated from the database primary key sequence
15:55 kf           kivilahtio__: there is probably not
15:55 j_wright     jeff: Plack::Handler::CGI
15:56 kf           kivilahtio__: koha assigns the biblionumber, you're not supposed to bring your own
15:56 kf           kivilahtio__: it's not made to do what you want
15:56 kivilahtio__ kf: But that is what the bulkmarcimport uses from "Koha to MARC mapping"
15:56 kf           the 999 are special
15:57 kf           see bug 6113
15:57 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=6113 enhancement, P5 - low, ---, gmcharlt, ASSIGNED , enhancement to keep previous ids
15:57 kf           i am still not sure it's a good idea
15:58 kivilahtio__ kf: if I can't have a biblionumber set in biblio-table, how can I link my items, when they have the biblionumber in them pointing to the parent record?
15:58 j_wright     jeff: XYZ is: rename app.psgi to app.cgi and change the shebang line to #!/usr/bin/env plackup, or use the three lines from the POD
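
Spelled out, j_wright's second option looks something like the wrapper below: a plain CGI script that loads the PSGI app and hands it to Plack::Handler::CGI. This is a sketch of the approach, with a made-up path, not a copy of the module's documentation:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Plack::Util;
    use Plack::Handler::CGI;

    # Load the PSGI application (returns the $app coderef)
    my $app = Plack::Util::load_psgi('/path/to/app.psgi');   # hypothetical path

    # Run one request in the current CGI environment
    Plack::Handler::CGI->new->run($app);
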
15:58 jcamins      kivilahtio__: what do you mean by "link your items"?
15:59 jcamins      That sounds like a one-time task.
15:59 kf           you can have a biblionumber, of course you need one, but we let koha set them.
15:59 kivilahtio__ kf: I cant migrate holdings with the marc records
15:59 kivilahtio__ kf: just too much reworking our migration tool
15:59 kf           you could write a script that looked up the number
15:59 kf           that's what we do
15:59 kf           we store the former id in a custom field and then look it up
16:00 kf           well for other migrations, for items + biblio we attach the items as 952
16:00 kf           and import them with the marc records
16:00 jcamins      I'd prepare MARC records that included the items prior to the import.
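
The "attach the items as 952" approach kf and jcamins describe looks roughly like this with MARC::Record, using the usual default Koha item subfield mapping (check your framework); the branch codes, call number and barcode below are made-up sample data:

    use strict;
    use warnings;
    use MARC::Record;
    use MARC::Field;

    my $record = MARC::Record->new();
    $record->append_fields(
        MARC::Field->new( '245', '0', '0', a => 'A migrated title' ),
        MARC::Field->new(
            '952', ' ', ' ',
            a => 'MAIN',             # homebranch
            b => 'MAIN',             # holdingbranch
            o => 'QA76.9 .D3 1999',  # call number
            p => '3001000123456',    # barcode
            y => 'BK',               # item type
        ),
    );

    print $record->as_formatted(), "\n";
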
16:01 kivilahtio__ jcamins: well we have component parts with links from 773w -> 001 of the parent
16:01 kivilahtio__ we cant change the database id
16:01 kf           and if you use an sql script... i would carry the number you need for linking in another custom field and adapt the sql accordingly
16:01 kf           kivilahtio__: 001 has nothing to do with biblionumber
16:01 jcamins      773$w->001 doesn't use biblionumber.
16:01 kivilahtio__ but our legacy data has that linkage
16:02 jcamins      Right, and that's not biblionumber.
16:02 kf           we do exactly that in koha
16:02 kivilahtio__ jcamins: so where does 773w point in koha?
16:02 kf           links from 001 to $w subfields
16:02 jcamins      The 001 field.
16:02 kf           the 001 of the host record
16:02 kf           which is not the bilbionumber
16:02 jcamins      Remember, Koha and EG treat records rather differently.
16:02 kivilahtio__ kf: but why cant they be the same? they are by definition the same thing?
16:02 kivilahtio__ jcamins: I hope I understood that better
16:03 kf           [off] https://hfjs.bsz-bw.de/cgi-bin/koha/opac-search.pl?idx=kw&op=and&idx=kw&op=and&idx=kw&limit=bib-level%3Aa&sort_by=relevance&do=Suche
16:03 jcamins      Whereas EG does a lot at the database level, Koha's bibliographic relationships are controlled only at the MARC level, in Zebra.
16:03 kf           kivilahtio__: I don't think there is a definition that your internal record number has to match the controlnumber
16:04 kf           and they could be the same thing, but it doesn't matter for your linking - it's only important that you keep your 001 and $w subfields intact
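
"Keep your 001 and $w subfields intact" means the child's 773$w must carry the same value as the host's 001. A small MARC::Record sketch of intact linkage (the control number is invented):

    use strict;
    use warnings;
    use MARC::Record;
    use MARC::Field;

    my $host = MARC::Record->new();
    $host->append_fields(
        MARC::Field->new( '001', 'legacy-0001042' ),              # control number
        MARC::Field->new( '245', '0', '0', a => 'Host journal' ),
    );

    my $child = MARC::Record->new();
    $child->append_fields(
        MARC::Field->new( '245', '0', '0', a => 'An article' ),
        MARC::Field->new(
            '773', '0', ' ',
            t => 'Host journal',
            w => 'legacy-0001042',   # must match the host's 001 for the link
        ),
    );
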
16:04 kivilahtio__ kf: ok
16:04 kf           kivilahtio__: the link above is a result list of articles
16:04 kf           as an example
16:05 kivilahtio__ kf: but what is the handicap of using our legacy ids? I have a solution in mind for that already and I think I could do that bug which was posted
16:05 kivilahtio__ programmatically it is not a problem, but I feel there are some sinister drawbacks to not using the in-DB primary key sequence?
16:05 kf           what happens if koha assigns the next number?
16:06 kf           are your legacy ids strictly numeric?
16:06 kivilahtio__ kf: I cant link holdings records to the bibliographic record
16:06 kivilahtio__ kf: yes
16:06 kivilahtio__ starting from 1001 ->
16:06 kivilahtio__ well within the database id range
16:06 kf           some people have voted for that bug - i am not a fan, but i just think meddling with the numbers is prone to create headaches later
16:06 jeff         j_wright++ -- thanks! (re: Plack pointers)
16:07 kivilahtio__ kf: I am only afraid of zebra
16:07 jcamins      I think making it possible to keep 001 as biblionumber would be nice, but not really necessary.
16:07 kf           hm there is also a bug for that
16:07 kivilahtio__ jcamins: I understand. But for me it is easier to just keep the 001 as a biblionumber :D
16:07 kivilahtio__ and make this odd bugfix while at it
16:07 jcamins      kivilahtio__: it is almost surely not easier.
16:08 kf           bug 9921
16:08 huginn       Bug http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=9921 enhancement, P5 - low, ---, nunyo, Patch doesn't apply , Make it possible to force 001 = biblionumber
16:08 kivilahtio__ jcamins: so what risks do you see in keeping the 001 matching the biblionumber and itemnumber?
16:08 jcamins      kivilahtio__: nothing, the problem is making the biblionumber match the 001.
16:08 kivilahtio__ jcamins: we use a strict integer, > 0
16:09 jcamins      If you have a single bad record, a single inconsistency, you are likely to trash your database.
16:09 jcamins      Itemnumber is of course unrelated.
16:09 kivilahtio__ jcamins: why? we have staggered numbers, some records have been deleted on the fly, so our id sequence would be like 1001,1002,1004,1006,1007,1008,1009,1011
16:09 kf           hm sorry have to leave
16:10 kf           bye all
16:10 kivilahtio__ kf: Thanks for your help!
16:10 jcamins      kivilahtio__: what happens when it turns out your previous system actually had a duplicate 001?
16:10 kivilahtio__ jcamins: well the biblionumber is the primary key, so it cannot be duplicated. Mysql deals with it
16:11 kivilahtio__ so the legacy record is not migrated
16:11 kivilahtio__ jcamins: it can be manually migrated later
16:11 kivilahtio__ jcamins: this happens occasionally and I dont see an issue with that
16:12 jcamins      You'd have to make it optional, because data from Voyager, III, and most other ILSes that I have encountered does not consistently have 001 set correctly.
16:12 kivilahtio__ jcamins: or in our case we collect the errors in a huge log and manually clean up our database as we proceed with the migration
16:12 kivilahtio__ jcamins: naturally
16:12 kivilahtio__ jcamins: Ill start working on that patch tomorrow
16:12 jcamins      As I said, I have no objection to you adding this feature, I just think that it's likely to not work nearly as well as you imagine.
16:13 kivilahtio__ jcamins: but it is good to know there are no Zebra related dangers
16:13 kivilahtio__ jcamins: if the only problem is the purity of the 001, then it will work for us
16:13 kivilahtio__ jcamins: And I would rather have this work for us in the master Koha
16:13 kivilahtio__ jcamins: so others could benefit as well
16:15 jcamins      The other danger I can see is primary key exhaustion.
16:15 kivilahtio__ jcamins: Agreed
16:15 jcamins      001 is often an arbitrarily large number.
16:15 kivilahtio__ jcamins: but the primary key can be compressed, it is a big script but doable when the time comes
16:16 kivilahtio__ just need to realign all the foreign key references
16:16 kivilahtio__ and force full reindex of Zebra
16:16 jcamins      So if you import one record that has a 001 of 2147483646, you've just killed your database.
16:16 jcamins      No, I mean in MySQL.
16:16 kivilahtio__ jcamins: we dont have ids that big
16:16 jcamins      You'll want to make *very* sure of that.
16:17 kivilahtio__ jcamins: I am sure Mysql takes care of its boundary values. But I will write a warning about that in the help
16:17 kivilahtio__ jcamins: do you really think MySQL goes crazy if you push its integer boundaries?
16:17 jcamins      No.
16:17 jcamins      But Koha will.
16:17 kivilahtio__ jcamins:  ahhh :D
16:18 kivilahtio__ jcamins: well I got your Koha INT_MAX :D
16:41 oleonard     *sigh* That was plenty of meeting.
18:53 cait1        permissions--
19:19 rambuten     Is the NCIP protocol specification finalized?
19:24 jeff         i'm not sure which of the last three words to mockingly place in quotation marks.
19:24 jeff         (sorry, snark is often not helpful)
19:24 jeff         rambuten: there are at least three currently published versions of the NCIP standard, and I suspect you could consider all of them finalized. see http://www.ncip.info/the-standard.html
19:24 rambuten     Yea, I was just looking at that site.
19:24 oleonard     Why do you ask rambuten? RDA is far from final but people are already "using" it
19:24 rambuten     We're funding a PAC development, and the dev had some questions.
19:24 oleonard     Can you say more?
19:24 jeff         The other aspect to consider is that the NCIP standard defines a protocol of sorts, which is used to implement one or more Application Profiles. Some of those Application Profiles have various "flavors" in the wild, when it comes to interacting with specific software.
19:27 oleonard     Even in the open source world, people are often mum
19:28 jeff         What aspect of NCIP relates to your PAC development (or vice versa)?
19:30 rambuten     Well, the PACs will authenticate to the Koha (and we hope to incorporate/test w/ Evergreen too) db server.
19:30 rambuten     SIP is a given, and probably NCIP will be included too.
19:30 jeff         ah. just to be clear, can you define what you mean by "PAC" in this context?
19:31 rambuten     Public Access Computing - public Internet computers
19:32 jeff         Since both Evergreen and Koha do a good job of speaking SIP2, I'd almost suggest leaving it at that. Is there any particular reason you're thinking about supporting NCIP? Do you want to talk to other things that do talk NCIP and do not talk SIP2?
19:33 rambuten     No, but "more is better", right? And there may be weird things out there that don't come to mind immediately where admins would prefer to use NCIP.
19:33 jcamins      You should ask dyrcona about NCIP.
19:34 rambuten     OK, next time I see him online I'll ping him.
19:34 jcamins      Not to say that jeff doesn't know about NCIP. He might.
19:34 jcamins      But I know dyrcona was working on NCIP a few weeks ago.
19:34 jeff         Keep in mind that neither Evergreen nor Koha currently speak NCIP "out of the box". If you don't have a particular need for it, I'd say leave room/hooks/whatnot for future enhancement, but leave it at that. :-)
19:34 rambuten     Wish KohaCon13 was having speed talks, I'd love to throw up some slides.
19:35 mtompset     rambuten: Sometimes less is more. :)
19:35 jcamins      rambuten: are you staying for the hackfest?
19:35 rambuten     no, sadly
19:35 rambuten     gmcharlt: about? Any NCIP comments?
19:36 cait1        rambuten: why not drop nancy a mail suggesting that?
19:36 rambuten     hummm
19:36 cait1        there might be some changes in the program last minute, you never know
19:36 gmcharlt     rambuten: briefly, because I'm in the weeds at the moment, you might want to consider joining http://lists.mvlc.org/listinfo/ncip-dev
19:36 jeff         There is an implementation of an NCIP 1.0 responder (intended for use with Evergreen and III's INN-REACH) here: https://github.com/jonscott/iNCIPit (with at least one more up to date fork available) and I believe Dyrcona and rangi and possibly others are collaborating on code here: http://git.evergreen-ils.org/?p=working/NCIPServer.git;a=summary
19:37 jeff         there's a developers-only list at the url gmcharlt posted (though others can supposedly lurk, as i understand it), and a document describing it at...
19:38 jeff         ah, here: NCIP Responder Design: https://docs.google.com/document/d/1NuV47145SnEV1f-9BrY2d9jpX8Ij_VxK2a51lotgSsU/edit
19:40 * jeff       sends sympathy gmcharlt's way -- good luck in the weeds
19:41 gaetan_B     bye !
19:43 * jeff       returns to making iNCIPit.cgi more useful
19:52 mtj          jeff, have you spotted rangi's ncip repo on github? -> https://github.com/ranginui/ncip-server-chris
19:53 rambuten     Does everybody have an NCIP dev/blog going on?
19:55 jeff         mtj: I hadn't, thanks. It seems to be a currently outdated additional remote for the repo I linked earlier, http://git.evergreen-ils.org/?p=working/NCIPServer.git;a=summary
19:56 mtj          ah, ok… i just spotted it on #koha yesterday :)
19:56 * jeff       stars/watches
19:56 jeff         mtj: thanks. :-)
20:32 rangi        jeff: thats just a mirror
20:32 rangi        http://git.evergreen-ils.org/?p=working/NCIPServer.git
20:32 rangi        its a collab between koha and evergreen people
20:33 cait1        very cool :)
20:37 rangi        my repo runs a little ahead, then gets merged
20:38 rangi        but its getting there
20:38 jeff         rangi: thanks for the clarification. that's what it looked like, but i wasn't certain.
20:39 jeff         well, i suppose "wasn't certain" isn't entirely accurate, but I'm not going to worry about being that precise. :P
20:41 jcamins      Ooh, good news everyone! My library school has just added a course on Dialog to their schedule for the fall!
20:41 cait1        wow
20:41 cait1        was dialog that query language for expensive databases?
20:41 rangi        ?
20:42 * cait1      should just look it up before asking stupid questions
20:42 jcamins      Dialog _is_ the expensive database.
20:42 jcamins      I only know of one thing that it is still used for regularly: citation searches for tenure applications.
20:43 jcamins      Spending a small fortune per search somehow doesn't appeal to most people.
20:43 cait1        oh
20:43 jcamins      Weird, eh?
20:43 cait1        it just sounded vaguely familiar
20:43 cait1        possibly other databases use their syntax or something?
20:44 cait1        i remember carefully preparing searches, then trying to be done really really fast with as few searches as possible, from library school
20:44 cait1        searches
20:44 wahanui      searches are rendered client-side, particularly if you click the knowledge trail maps on the right.
20:44 jcamins      Yeah, that's Dialog.
20:44 cait1        searches then... typos... :(
20:44 cait1        glad you still got it
20:44 cait1        :)
20:58 magnuse      rangi: did you ditch Dancer for NCIP, or is that another piece of the puzzle?
20:58 rambuten     http://www.wired.com/wiredenterprise/2013/09/gendarmerie_linux/
21:11 eythian      hi
21:11 wahanui      que tal, eythian
21:13 rangi        its another piece of the puzzle
21:14 rangi        so, there is the NCIPServer.pm
21:14 rangi        which is called by bin/start_server.pl
21:14 rangi        that runs raw
21:14 rangi        then NCIPResponder.pm
21:14 rangi        is for the mod_perl implementation
21:16 rangi        NCIP needs to do both raw and http(s) to be a full implementation
21:16 rangi        im going to do a dancer wrapper too
21:17 wizzyrea     the authority magnifying glasses - can you turn those off?
21:17 cait1        css
21:17 wizzyrea     ok cool ty
21:17 cait1        :)
21:18 wizzyrea     didn't know if that was a syspref or a hack-your-way-around-it thing.
21:20 cait1        hm i think the css here is intended
21:20 cait1        for that purpose
21:20 cait1        instead of adding another pref
21:20 wizzyrea     yep that's fine
21:20 wizzyrea     Just needed to know :)
21:20 cait1        so... does someone know why i have no log files appearing in my koha-dev/var/logs? :(
21:20 wizzyrea     gitified?
21:21 cait1        no, still haven't got around to trying that
21:21 wizzyrea     bc if it's a gitified dev environment the logs will be where the packages keep them.
21:24 cait1        rangi++
21:24 cait1        restarted apache, now it works
21:28 cait1        anyone feeling like they really want to do me a big favour? :)
21:28 cait1        and order and receive something on current master?
21:30 cait1        eythian: guess now you will have to come back here and buy him a beer? ;)
21:31 eythian      cait1: guess I will :)
21:31 cait1        :)
21:43 bag          manual?
21:43 wahanui      manual is, like, at http://www.koha-community.org/documentation
21:43 bag          anybody able to get that to load for them ^^^^
21:44 jcamins      @isitdown koha-community.org
21:44 huginn       jcamins: I'll give you the answer just as soon as RDA is ready
21:45 eythian      bag: not working for me
21:45 jcamins      bag: Looks like it's down.
21:45 cait1        hi bag
21:45 bag          thanks for me too
21:45 eythian      wizzyrea: is that something you can poke at?
21:45 wizzyrea     yep
21:48 bag          hey cait
21:53 rangi        gmcharlt: you around?
21:59 wizzyrea     that should be back
22:01 rambuten     gmcharlt is in the weeds
22:01 wizzyrea     oh look you're all checking to see if I'm telling the truth >.>
22:02 rambuten     http://www.koha-community.org/documentation  <- works for me
22:19 wizzyrea     Someone was just slamming it with Joomla attacks. Silly people, that won't work - it's wordpress. :P
22:22 wizzyrea     well it obviously *did* work as a denial of service heh.
22:33 rambuten     koha-community you mean?
22:34 wizzyrea     yep
22:36 bag          thanks wizzyrea
23:29 bag          @later tell cait - http://bugs.koha-community.org/bugzilla3/show_bug.cgi?id=10486  updates added like requested :)
23:29 huginn       bag: The operation succeeded.
23:29 huginn       Bug 10486: new feature, P5 - low, ---, jweaver, Needs Signoff , Allow external Z39.50 targets to be searched from the OPAC
23:42 dcook        Hmm, anyone have any back-ups of the LoC website? :p
23:43 jcamins      dcook: yeah, see code4lib. :)
23:43 jcamins      Well, except for Z39.50. You're SOL there.
23:44 dcook        Where am I looking for code4lib?
23:44 jcamins      The mailing list.
23:44 wahanui      the mailing list is at http://koha-community.org/support/koha-mailing-lists/
23:45 jcamins      http://stuff.coffeecode.net/www.loc.gov/marc/
23:46 dcook        Intriguing
23:46 dcook        Thanks for the link
23:46 dcook        I actually don't see anything in the mailing list. Hmm. *shrug*
23:46 dcook        Well, except the system halt message
23:47 jcamins      It's the last message in the HEADS UP thread.
23:47 dcook        Ah, it probably just hasn't come in my mailbox yet
23:48 dcook        As with all things, I get the digests
23:58 dcook        "The Evergreen and Koha integrated library systems now express their record details in the schema.org vocabulary out of the box using RDFa. "
23:58 dcook        Ummm, what?
23:58 dcook        I don't recall seeing RDFa anywhere in Koha...
23:59 jcamins      Run a detail record through Google's rich snippets tool.