IRC log for #koha, 2008-08-28

All times shown according to UTC.

Time S Nick Message
12:25 danny morning #koha
12:30 mc afternoon, danny
12:33 mason hey guys
12:33 mason im up a bit late doing taxes, lol..
12:34 mc :)
13:20 hdl acmoore++ for #kohanews
13:21 acmoore glad you like it!
13:26 rhcl #kohanews is a channel? On this server?
13:32 acmoore it's on the freenode network. you can use the server if you like.
13:33 acmoore you can read more about it at[…]-out-in-kohanews/
13:34 acmoore There's a bot in there that announces things like changes to bugs, or stuff checked in to git, and a few other types of things.
13:35 acmoore see also:[…]anews_irc_channel
13:52 ghatt bonjour à tous [French: "hello, everyone"]
14:12 gmcharlt hi
14:12 mc ghatt, this is the english chan ..
16:02 frederic hi
16:40 MikeJones Well, almost ready to install Koha, going through making sure all the dependencies are met, however I'm getting an error trying to install Text::Iconv
16:40 MikeJones it fails when checking for iconv, which is installed in /usr/bin
16:41 MikeJones i know it's not Koha specific, but I was wondering if anyone else ran into this issue?
17:44 gmcharlt owen: ouch
17:50 atz incredible
17:50 atz how does redhat just sit on it for 9 months?
17:52 atz gmcharlt: "Note that the default Perl on all Red Hat-based Amazon EC2 VMIs has this bug"
17:53 atz thankfully we're not on those.
17:53 gmcharlt atz: agreed
17:53 atz staggering to think that you might be incurring CPU charges based on this
17:59 owen "email" is outnumbering "e-mail" in the templates, so I guess we'll go with "email." anyone have an opinion?
17:59 atz screw hypens
17:59 atz er... hypHens
18:08 acmoore http://www.chicagomanualofstyl[…]esEmDashes09.html
18:08 acmoore They use 'e-mail' all over the Chicago manual of style.
18:10 atz acmoore: is that an argument for or against? :)
18:11 atz yeah, otherwise you might be calling a program "e" with the -mail option
18:18 ryan in fact, i think i prefer just 'mail'.
18:18 owen I agree--where you can get away with it. Can't always, though.
19:44 liz quick question... is there somewhere online where all of the template tags are documented? Did I just miss it?
19:45 atz liz: you mean HTML::Template::Pro tags?
19:45 liz right
19:45 atz yes, 1 sec
19:46 liz atz: thanks
19:47 atz[…]mplate/SYNTAX.pod
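For quick reference, the core HTML::Template::Pro tags being discussed look like this (the variable names here are invented for illustration):

```html
<!-- TMPL_VAR interpolates a scalar passed in by the Perl script -->
<TMPL_VAR NAME="title">

<!-- TMPL_IF / TMPL_ELSE handle conditionals -->
<TMPL_IF NAME="is_staff">Staff view<TMPL_ELSE>OPAC view</TMPL_IF>

<!-- TMPL_LOOP iterates over an arrayref of hashrefs -->
<TMPL_LOOP NAME="items">
  <li><TMPL_VAR NAME="barcode"></li>
</TMPL_LOOP>
```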
19:47 owen Anyone know why links to are passing title and author as well as biblionumber?
19:48 atz liz: quick advisory -- EXPR is to be avoided when possible
19:48 liz 2nd question, is there a place where the koha variables are listed as well?
19:48 liz done and done
19:49 atz liz: "the koha variables" ?
19:49 atz do you mean system preferences?
19:49 owen Or do you mean variables available to the template?
19:49 atz otherwise, no... everything is specific to the script being run
19:49 liz no,... yes, the variables available to the template
19:49 gmcharlt owen: dead weight of history. does nothing with those
19:50 owen thanks gmcharlt, I'll correct the links
19:50 liz sorry, i'm still learning what's what with html::template
19:50 gmcharlt owen: while you're at it, please take out those two corresponding lines in
19:50 atz liz: a lot of them get loaded by (so are usually available)
19:51 owen gmcharlt: I will
19:51 atz the rest you have to look at the script for a line like  $template->param(your_variable_in_question => 1);
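On the script side, the pattern atz describes looks roughly like this (a sketch; the template filename and parameter names are invented):

```perl
use strict;
use warnings;
use HTML::Template::Pro;

my $template = HTML::Template::Pro->new(filename => '');

# Each param() key becomes a TMPL_VAR / TMPL_IF / TMPL_LOOP name
# available inside the template
$template->param(
    title    => 'My Library',
    is_staff => 1,
    items    => [ { barcode => '123' }, { barcode => '456' } ],
);

print $template->output;
```

This is why the available variables are specific to each script: only what that script passes to param() exists in its template.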
19:51 gmcharlt owen: thanks
19:51 liz sadly I don't have access to most of the guts of koha: we are hosted
19:52 liz thanks for the tips... I'll go take a look
20:13 Sharon how hard would it be to change the holdings table on the page to show the permanent location instead of the current location, since the current location is given in the status?
20:19 chris morning
20:20 gmcharlt hi chris
20:22 owen is made obsolete by
20:22 chris pretty much yeah
20:27 owen The "total checkouts" number on seems to be inaccurate. Is that just a quirk of the sample data?
20:28 chris i think that gets it from the count on the item table, so yes it could quite likely be out
20:50 chris lol
20:58 chris i think tina's computer may have narcolepsy
23:56 pianohacker The zebraqueue daemon has been dropped, yes?
23:57 mason yeah, i think so
23:57 pianohacker Any particular reason? It seems like a good idea (in theory)
23:57 mason or at least i recall it about to be dropped  a few weeks ago??
23:58 mason as a cron was the subst. i think
23:58 pianohacker Yah
23:58 pianohacker kados? (I think you're the zebra guru)
23:59 pianohacker Interesting, INSTALL.debian still references it
00:00 mason poss. has had some recent improvements, that people cant justify backporting to the daemon
00:00 pianohacker Seems like it
00:00 pianohacker I got a unknown record type: grs.xml when I tried it myself
00:00 mason if they do the same thing , then just keep one tool, and make it  a good one
00:01 mason KOHA_CONF set correctly?
00:01 pianohacker Yah
00:01 pianohacker The grs.xml was for zebraqueue-d
00:09 chris the only problem with the cronjob
00:10 chris is running it faster than every minute :)
00:10 pianohacker Yah
00:10 pianohacker Hmm
00:10 chris thats why i wrote the daemon
00:10 pianohacker Or getting to run at all
00:10 chris as a proof of concept
00:11 pianohacker My cronjob zebraqueue is exporting the records but not actually indexing them :P
00:11 chris theres probably nicer ways to do it, but i still think a daemon is the way to go
00:11 pianohacker *cronjob rebuild_zebra
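For context, the cron approach being discussed boils down to a crontab entry along these lines (path and flags are illustrative; check your install's example crontab), and cron's one-minute minimum interval is exactly the lag in question:

```
# Process the zebraqueue table once a minute -- cron cannot run any faster
* * * * * /path/to/koha/misc/migration_tools/ -b -a -z
```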
00:12 pianohacker Might be worth moving the zebra{,queue} management code into a C4 module; thus rebuild_zebra and zebraqueue-d can do different things without too much duplication of code
00:12 chris yep
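A hypothetical shape for that shared C4 module (names invented; nothing like this existed at the time of this conversation):

```perl
package C4::ZebraQueue;

use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(pending_entries reindex mark_done);

# Return unprocessed rows from the zebraqueue table
sub pending_entries { ... }

# Export one record and push it to Zebra
sub reindex { ... }

# Flag a queue row as processed
sub mark_done { ... }

1;
```

The cron script and a daemon could then share the same queue-handling code and differ only in how they are invoked.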
00:13 chris and not using POE maybe
00:13 pianohacker Any particular reason besides heavy dependency?
00:13 chris it seems to sometimes flip out and eat memory
00:14 chris i used POE, because id used it in the past for things like and was familiar with it
00:14 chris but now i like the look of Proc::Daemon
00:14 pianohacker Ahh
00:15 chris or even Proc::Application::Daemon
00:15 chris lots of options :)
00:16 pianohacker Ahh, CPAN
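A minimal sketch of the Proc::Daemon approach chris mentions (the indexing helper is hypothetical):

```perl
use strict;
use warnings;
use Proc::Daemon;

# Detach from the terminal: fork, chdir to /, close filehandles
Proc::Daemon::Init();

while (1) {
    process_zebraqueue();   # hypothetical: handle pending queue entries
    sleep 5;                # poll well under cron's one-minute floor
}
```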
00:19 mason yesss, the index-daemon was orig. written to get around the 1 minute index lag that the cron method had
00:20 pianohacker See, if we wanted to get really crazy (you should all run away now)
00:20 mason as many catalogers wanted to view the record after they had cat-ed it, and had to wait 60 seconds to do that..
00:21 pianohacker We could rewrite zebraqueue-daemon to accept connections from koha that told it to update records
00:21 pianohacker And make the zebraqueue table a fallback for failed connections, which that daemon (on startup and occasionally) and rebuild_zebra -z would check
00:22 chris yep
00:22 chris just have it listen on a socket
00:23 pianohacker Yah
00:23 chris thats how my cafenet daemon works, it responds to login requests from a webserver
00:23 chris that interrupts
00:23 chris the rest of the time it logs traffic counts to a db
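The listen-on-a-socket idea could be sketched like this (port number and reindexing helper are invented):

```perl
use strict;
use warnings;
use IO::Socket::INET;

my $server = IO::Socket::INET->new(
    LocalPort => 9999,   # illustrative port
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

# Koha sends a biblionumber per connection; the daemon reindexes it
# immediately, so there is no polling lag at all
while (my $client = $server->accept) {
    chomp(my $biblionumber = <$client>);
    reindex_record($biblionumber);   # hypothetical helper
    close $client;
}
```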
00:55 ryan pianohacker: zebraqueue daemon can only update one record at a time, so if every circ event causes a record to be reindexed, you outpace the max update speed of zq daemon pretty quickly.
00:55 ryan the cron script can reindex many records at once.
00:55 chris surely the daemon could do that too
00:56 ryan well, the cron script calls zebraidx.
00:56 chris yep
00:56 ryan i think the api for ZOOM only supports extended services updates
00:56 chris the daemon could do that too?
00:57 ryan yeah, i guess so.
00:58 ryan shouldn't take too much to do it, but the cron script is working reasonably well for us .
00:58 chris yep, always gonna have that 1 min lag
00:58 chris tho
00:58 chris well potential for it
00:59 ryan we don't really go faster than 5 minutes in practice.  
00:59 ryan most people aren't too put out by the lag
00:59 chris yeah, some ppl are grizzling on the list tho, that their cataloguers want to see it in the results 'now'
01:00 chris id probably tell them to just deal with it :-)
01:00 ryan :)
01:00 ryan I'm sure it'll get done, just enough higher priority things to keep pushing it back :)
01:00 chris yep
01:29 mason hmm, i remember the index-daemon thing being written primarily coz clients were vocal about the lag-issue being a pain.
01:29 mason which is something we may have forgotten when rolling back to the cron method
01:30 chris well, i cant remember why i wrote it originally, i think because i could
01:30 mason ah, the horse... ;)
01:31 mason sounds like the solution is the nicer index features of the cron script, combined with the existing daemon code
01:32 mason which could even be a quicky..
03:38 kwak hi, we're using concourse ILS right now and it can export to MARC21. How do I import those records into KOHA?
03:44 chris[…]tage-MARC-Records
04:06 kwak thanks chris
04:08 chris lots of good info in that manual :)
04:08 chris there is also the newbie guide
04:09 chris
04:15 kwak yea, i saw that manual. i just wanted a quick answer :) and i appreciate the newbie guide. haven't seen that
04:38 kwak How do I add a logo with our school name on top of the OPAC page?
04:47 kwak and i'm deleting the libraries that came with the installation, but can't delete Franklin, Midway, Pleasant Valley, or Springfield
07:25 hdl hi chris
07:26 chris heya hdl
07:27 hdl just seen your discussion about daemon and cron.
07:27 chris ah right
07:27 hdl I favour daemon too because some customers have old time techniques.
07:30 chris *nod*
07:31 hdl But I also think we cannot maintain 2 scripts for updating biblios.
07:32 chris naw, that shouldnt be hard
07:32 chris all the stuff should be in a module
07:32 hdl Maybe adding daemon option to rebuild_zebra could be the right thing.
07:32 chris and just the way its called
07:32 chris be in the script
07:33 hdl sthg like
07:33 chris yeah something like that
07:33 hdl in an Engine directory
07:34 hdl quite a heavy change if it goes that far.
09:05 kwak why is it that after i imported a marc21 file, when i check it, it says holdings (0).
09:06 hdl because a) your holdings may not be in the correct place (952)
09:07 hdl b) your holdings may not have the correct libraries.
10:39 kwak hi hdl? how do I fix this?
11:12 frederic kwak: In Koha > Administration > MARC Framework, take a look at 952 field.
11:12 frederic Look at its subfields. $p is required. Some others too... like location.
11:13 frederic Your MARC21 records need to have data in 952 fields if you want the Koha import script to create item records.
11:13 frederic So you may have to create these 952 fields or move another existing 'item' field into 952.
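As an example, an importable MARC21 record carries 952 fields along these lines (the branch codes and barcode are made up; $a and $b are the home and holding branches, $p the barcode, $y the item type):

```
952  $a MAIN  $b MAIN  $p 31111000123456  $y BK
```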
