IRC log for #koha, 2009-01-09

All times shown according to UTC.

Time S Nick Message
11:19 nahuel can someone tell me why on http://wiki.koha.org/doku.php?[…]ment:dbrevs:start there are two versions "v3.01.00.011" ? and which version should I add, .012 or .015?
12:23 gmcharlt nahuel: please use .015
13:08 hdl_laptop hi gmcharlt
13:09 hdl_laptop have you had time to take a look at the patches I sent you ?
13:09 hdl_laptop (just For My Information)
13:10 nahuel ok gmcharlt thanks
13:10 nahuel gmcharlt, did you see my new zotero patch ?
13:11 mc hell all
13:11 mc hello all
13:11 mc better
13:11 hdl_laptop yes
13:42 paul_p ;-)
13:42 paul_p hi owen.
13:42 owen Hi paul_p
13:43 owen Today we've got new snow, although not too much
13:43 paul_p last time there was this much snow in Marseille : 22 years ago !
13:43 paul_p I said "once every 10 years", I was underestimating ;-)
13:43 owen Now everyone in Marseille is saying "There is no global warming!"
13:44 paul_p not sure : Marseille ppl are well known to be the biggest complainers in France !
13:44 paul_p we are famous for that and for soccer
13:45 owen :)
13:50 kf it was in german news yesterday - snow in marseille and a short film
13:51 kf :)
13:52 owen kf, you're in Germany?
13:55 kf yes
14:00 kf some snow here, cold but sunny ;)
14:02 owen I don't think we've met before kf, I'm from the Nelsonville Public Library in Ohio, U.S.A.
14:04 kf i know, you are responsible for koha looking so nice
14:07 kf i did some work on german translation and getting deeper in koha everyday now
14:12 owen I'm a little behind on Koha work these days because my library is preparing for its own upgrade to 3.0 (with Liblime's help)
14:14 hdl_laptop hi kf :
14:15 hdl_laptop I am working on 3.0.1
14:18 kf hi hdl
15:29 teelmo hello
15:29 gmcharlt hi teelmo
15:39 teelmo i was trying to install koha earlier today :) didn't quite manage
15:50 nahuel gmcharlt, you have 2 mins ?
15:50 gmcharlt nahuel: for you, yes :)
15:51 nahuel hehe :)
15:51 gmcharlt haven't had a chance to test your COinS patches yet, but later today
15:51 nahuel ok great, but I wanted to talk to you about the mail I just sent to koha-devel about innodb
15:51 nahuel and the growing disk space used by innodb
15:51 nahuel has anyone at liblime had the same problem ?
15:52 gmcharlt I assume we do
15:52 nahuel how did you deal with it ?
15:52 gmcharlt a couple things I can think of right away - enable the innodb_file_per_table feature.
15:52 nahuel to decrease ?
15:52 nahuel hmmm, ok, but it does not delete the data
15:54 gmcharlt right, but with care, you would be able to selectively shrink certain tables
15:54 gmcharlt still means production downtime
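[editor's note: gmcharlt's first suggestion, innodb_file_per_table, is a MySQL my.cnf setting; a minimal sketch of enabling it might look like the following — the file path and section name are the usual MySQL defaults, not something stated in the log:]

```ini
# /etc/mysql/my.cnf -- sketch
[mysqld]
# Give each InnoDB table its own .ibd file, so that rebuilding a
# single table (e.g. OPTIMIZE TABLE) can return its space to the OS.
# The shared ibdata1 tablespace itself still never shrinks, and only
# tables created after this is enabled get their own file.
innodb_file_per_table = 1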
15:54 gmcharlt another trick is during your production data migration
15:54 gmcharlt after you finish the load, either drop the database and load it from a dump
15:54 gmcharlt or transfer the dump from a temp database used for the migration to the production one
15:55 nahuel yes, but I'm thinking of a client that has a 10-year-old installation...
15:55 gmcharlt that way you don't start off with unnecessary space for any temporary stuff created during the migration
15:55 gmcharlt ah
15:55 nahuel or like an university use
15:55 gmcharlt thinking aloud, one possibility might be setting up replication
15:56 nahuel that delete/create thousands of "patrons" each year
15:56 nahuel and all the loans
15:56 gmcharlt then switch over from master to replica
15:56 nahuel yes it could be a solution, but it's a big problem of innodb...
15:56 gmcharlt minimizes downtime, but I haven't actually tried anything like that
15:57 nahuel Well, another solution for us is to dump/restore the database each upgrade time
15:58 gmcharlt that's not unreasonable for major upgrades
15:58 gmcharlt since you'll have some downtime anyway
15:58 gmcharlt (or if you're doing something clever to not have downtime, it would likely involve having a copy of the database anyway)
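[editor's note: the dump-and-recreate cycle being discussed could be sketched roughly as below; the database name, dump path, and credential handling are placeholders, the script prints the commands by default as a dry run, and doing this for real implies downtime while it runs:]

```shell
#!/bin/sh
# Sketch: reclaim InnoDB space by dumping, dropping, and reloading
# the database. DB name and dump path are placeholder assumptions.
set -e

DRY_RUN=${DRY_RUN:-1}      # default: only print the commands
DB=${DB:-koha}             # placeholder database name
DUMP=${DUMP:-/tmp/$DB.sql} # placeholder dump location

# Print the command in dry-run mode, otherwise execute it.
run() {
  if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi
}

run mysqldump --single-transaction --result-file="$DUMP" "$DB"
run mysql -e "DROP DATABASE $DB; CREATE DATABASE $DB"
run sh -c "mysql $DB < $DUMP"
```

To actually run it you would clear the DRY_RUN guard and make sure credentials are available (e.g. via ~/.my.cnf); the point is the drop/recreate step, which rebuilds the tablespace at its minimum size.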
16:00 gmcharlt nahuel: cfouts will undoubtedly have better insight
16:01 nahuel it could be a solution for long-term support
16:01 nahuel but imagine someone who installs koha and does not do this for 10 years
16:01 nahuel and has 10k users
16:03 cfouts what's up?
16:04 gmcharlt cfouts: nahuel has some questions about shrinking innodb datafiles for tables and databases that do lots of inserts and deletes
16:05 cfouts not much to be done about it, unfortunately.
16:05 nahuel :s
16:05 nahuel you just let the databases grow ?
16:06 cfouts well, they don't grow without bounds, generally.
16:06 cfouts it grows to accommodate however much data you put into it.
16:06 cfouts and then never shrinks if you delete some data.
16:07 nahuel yes of course, but i'm thinking of some libraries that receive/delete a lot of data each month
16:07 nahuel the database is permanently growing
16:07 nahuel and I think you can count the growth in GB each year
16:07 cfouts lots of that is often the sessions table. have you looked at that?
16:08 nahuel yes
16:08 nahuel too big ;)
16:08 nahuel but what do you do to decrease it ?
16:08 cfouts it also contributes in terms of table fragmentation, which gives very sparse and inefficient data storage.
16:08 cfouts 'truncate sessions'
16:08 cfouts at night when not many people are using it.
16:08 nahuel but innodb does not delete data from disk
16:09 cfouts no, it will never shrink, but periodically truncating the sessions table will keep it from growing further.
16:09 hdl_laptop cfouts: having a decent session expiry time would also be a solution
16:09 cfouts I've got a bug in for that.
16:10 nahuel mc should look to fix this
16:10 cfouts that's the ideal solution, but truncate works for now
16:10 nahuel but as I understand it, that does not decrease the disk space used?
16:11 cfouts no, you have to dump and rebuild to shrink the database file size.
16:12 nahuel dump/rebuild is bad on production databases imho
16:12 cfouts yes, it's not an ideal solution, that's for sure.
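[editor's note: cfouts' nightly "truncate sessions" is easy to automate; one sketch as a system crontab entry — the 03:00 schedule, the koha unix user and database name, and credentials living in that user's ~/.my.cnf are all assumptions, not from the log:]

```
# /etc/cron.d/koha-truncate-sessions -- sketch
# Empty the sessions table nightly, while few patrons are online.
# This stops the table growing further; it does not shrink the
# InnoDB tablespace that already exists on disk.
0 3 * * * koha mysql koha -e 'TRUNCATE TABLE sessions'
```

Note that truncating logs out anyone with an active session, hence running it off-peak.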
16:12 mc nahuel, i'm reading
16:13 nahuel mc, :)
16:13 nahuel but this innodb problem is imho a big problem...
16:13 cfouts you can do other things, like connect a replication slave to create a new copy of the data
16:13 cfouts disk space is pretty cheap. I don't know how big of a problem this is.
16:14 nahuel The problem is you do not control the disk space growth
16:14 nahuel it no longer depends on the data in the database
16:14 nahuel but on the USE of the database
16:15 mc nahuel, what do you expect from me ... i'm just a rookie with mysql
16:15 gmcharlt but that's always the case
16:15 nahuel mc, we talked about sessions expiry
16:15 mc almost a rookie in databases in general
16:15 mc ooh
16:15 mc yes:
16:15 nahuel gmcharlt, hmm no :)
16:15 gmcharlt a library could start out small, get a big grant, and increase its collection ten-fold
16:15 mc i'm aware
16:15 nahuel with myisam, the disk space used is the data disk space...
16:15 nahuel (and indexes)
16:15 gmcharlt like anything else, disk space usage has to be monitored
16:16 cfouts myisam doesn't support foreign key constraints
16:16 nahuel cfouts, yes i know :)
16:16 nahuel postgres does :)
16:19 nahuel well, the question at the start was : does liblime have a solution ?
16:22 gmcharlt well, at the moment the best idea is probably to dump/recreate the database during scheduled downtime for an upgrade
16:22 gmcharlt or do something with a replica
16:23 paul_p gmcharlt: that was exactly the idea of nahuel : clean when upgrading !!!
16:25 nahuel :)
16:29 owen I'm surprised to realize that you can't search tags in Koha 3. Also surprised that I never thought of it until one of my staff asked about it.
16:57 nengard owen i was under the impression that since tagging was still experimental more cool features would be coming - eventually
16:57 nengard i think atz is the one to ask about that
16:58 owen Yeah, I'm sure. I was just surprised at myself for never noticing.
16:59 atz owen: the correct fix would be to index tags in zebra
16:59 hdl_laptop owen: fwiw, imho, tags should be considered as metadata.
16:59 hdl_laptop atz: same idea
16:59 atz but i don't have time/skillz/sponsorship for that
16:59 owen Yeah, my library is full of ideas but empty of cash.
16:59 atz it's not really experimental though.  it's pretty stable at this point.
17:00 hdl_laptop atm, multiple MARC flavour support is quite time consuming for all of us.
17:01 atz hdl_laptop: I imagine that would be very difficult
17:04 hdl_laptop mmm... not that much....
17:05 hdl_laptop if we add a new link to a subfield.
17:05 hdl_laptop But that would double the data.
17:05 gmcharlt conceptually, adding a marcflavour column to biblio and replacing preference lookups would go a long way
17:05 gmcharlt but there are lots of details to attend to
17:06 gmcharlt - indexing : would require DOM mode to have enough flexibility
17:06 hdl_laptop And we should then add user information.
17:06 hdl_laptop gmcharlt: ++
17:06 gmcharlt - UI changes for user to know/set which MARC format is in use at a given point in time
17:07 gmcharlt - the obvious thought that if you make Koha support multiple MARC formats, you may as well go all the way and support native DC and other metadata formats
17:07 hdl_laptop Sure.... long way.
17:08 gmcharlt - finding a large rock to hide under as traditional catalogers and metadata librarians start fighting over the ILS ;)
17:09 hdl_laptop provided that traditional catalog is working fine, i think it would be OK for traditional catalogers.
17:10 gmcharlt of course, I was just joking
17:20 trepador hi, i finally got koha installed
17:20 trepador but when i try to connect, the browser shows me this fatal error
17:21 trepador Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
17:21 trepador does someone know what the problem is
17:21 trepador thanks
17:34 hdl_laptop trepador: yes.
17:34 hdl_laptop mysql doesn't seem to be running
17:37 trepador sorry but i'm a newbie hehehe
17:37 trepador i don't have mysql-server installed
17:37 trepador now i'm installing with the web installer
17:37 trepador thanks
17:41 hdl_laptop trepador: I think that MANY things are told in INSTALL documents.
17:41 atz you can't do web install w/o mysql.  
17:48 trepador thank you hdl_laptop, but i have no idea about apache and mysql ; i started learning them two days ago
17:49 trepador i read many times the installation page on kubuntu hardy heron
17:49 trepador and one thing more i apologize for my poor english
18:10 rhcl I don't really know anything about rima-tde.net, but just as a side note it has some interesting google hits.
19:23 atz owen: what mechanism do we use for keyboard hotkeys, if any, in the staff interface?
19:24 owen It's not used extensively (or consistently, probably), but there is a jquery plugin being used in some places--the resident search form, for instance
19:25 atz i see jquery.hotkeys.js
19:25 owen Look in staff-global.js for where it is put to use
19:25 atz cool, thx
19:28 atz looks like alt+r, alt+u, and alt+q
19:28 owen Right, for checkins, checkouts, and searches
19:28 owen See this bug for complications: http://bugs.koha.org/cgi-bin/b[…]w_bug.cgi?id=2639
19:29 owen I haven't looked into the bug report because I always forget when I'm on my Mac :|
19:29 atz i'd say that's a minor bug unless it affects win32 FF or IE
19:34 atz hmm.... option r gives me ®
19:34 atz q => œ
19:35 atz u => opens some kind of dead-key extra-keyboard entry
19:36 atz maybe i don't even know how to get to "alt" on my keyboard
19:37 atz it's the same key as option on this small MacBook keyboard... but as a secondary
19:37 atz so maybe i need shift or fn or some other magic
19:39 chris morning
19:46 atz maybe alt's not working b/c i installed "Witch"
19:46 atz hi chris
19:47 atz same results w/ Witch disabled...
19:47 atz Œ‰¨
19:48 owen Is there a preference for controlling how transfers are handled by default in returns?
19:48 owen For instance: whether or not to automatically transfer items which belong at other branches
19:48 atz for shift + alt/option + q|r|u
19:48 chris i think so owen
19:48 chris but i may be misremembering
19:49 chris ill go look
19:49 owen Hmm... AutomaticItemReturn maybe?
19:49 chris hmm, my staff site is in greek
19:49 owen "This Variable allow or not to return automaticly to his homebranch"
19:49 chris this is gonna be tricky
19:49 owen :)
19:49 atz owen: i think there is a SomethingNeedsSomething syspref?
19:49 atz (search for need)
19:50 cait AutomaticItemReturn should be right, i remember i tested it wednesday
19:50 chris yep
19:50 chris If ON, Koha will automatically set up a transfer of this item to its homebranch
19:51 owen Yeah, looks like what we were looking for.
19:52 owen Too bad there isn't a "do not transfer" button offered like there is a "do transfer" button when it's switched off.
19:53 cait HomeOrHoldingBranch might also be interesting
19:54 chris thats more for if you can issue an item
19:55 cait or if it has to go back first, yes
20:10 cait its greek g
20:10 cait ah meant for chris
20:11 chris heh, yeah you can change it
20:11 cait need login first :)
20:13 chris done :)
20:14 cait i did it too - parallel? g
21:35 danny i have a patch that will make a couple of files obsolete, what is the general procedure for that? Is a separate patch for removing old files best or including that in the main patch?
21:36 gmcharlt put in the main patch if removing the files is logically part of the change
21:36 gmcharlt if it's just something you noticed along the way, put it in a separate patch
21:36 danny ok
22:34 SelfishMan mwalling: That's funny
22:34 SelfishMan oops, wrong window
23:14 atz chris: besides the "encoding" field not being populated, what else might cause working z3950 targets in 2.2 to break after upgrade to 3.0?
23:15 atz it seems odd that the 2.2 implementation works better out of the box than the 3.0 one
23:53 chris the 2.2 worked totally different
23:53 chris you had to have a daemon type thing running on your box
23:53 chris that actually did the z3950 searches
23:54 chris and it didnt use ZOOM
23:54 chris so theres a few things different
00:08 atz chris: any recommendations for getting busted ones working in 3.0?  Clients can't be persuaded that the sources they were using last week are unnecessary in the fancy new version of the software.....
00:09 atz i suppose i'll just try recreating broken ones from scratch.  it is probably a default value (in addition to encoding) that doesn't get applied by migration scripts...
00:10 chris could well be it
00:10 chris give that a whirl then see if you can spot the diff
00:12 atz interesting
00:12 atz was it an indexdata daemon also?
00:12 chris nope, totally custom written
00:12 atz wow... had no idea
00:12 chris lemme find it
00:13 chris http://git.koha.org/cgi-bin/gi[…]ccd56706dfbf49df8
00:13 chris the z3950-daemon-shell.sh and z3950-daemon-launch.sh
00:13 chris in there
00:15 atz chris: http://git.koha.org/cgi-bin/gi[…]ccd56706dfbf49df8
00:15 atz uses ZOOM ?
00:16 chris oh maybe it was changed
00:16 chris it used to use plain old Net::Z3950
00:16 atz ah, interesting
00:16 chris must have changed when that changed to ZOOM
00:17 atz alright.... now here's an interesting part:
00:17 atz use Encode::Guess;
00:18 atz my $decoder = guess_encoding($marcdata, qw/utf8 latin1/);
00:19 atz might have to look at that again later
01:08 mason atz: i wonder if you are hitting a similar encoding bug to me?
01:09 mason "besides the "encoding" field not being populated,"
01:10 mason tho, my encoding bug is a bit more obvious than yours , it seems..
06:52 mc hello world
07:25 chris hi mc
