IRC log for #koha, 2005-07-08

All times shown according to UTC.

Time Nick Message
12:08 thd tim: I was most interested in the MARC record issue.  I have no knowledge of circulation issues, they are not formally part of my problem space.  I had been pleased to see the code that you had posted on the list for helping me think about the MARC records issue.
12:19 kados tim: yes ... thanks very much ... I'm still waiting for the client to export his data
12:19 tim hdl: did you see my post about the bug in the original script?
12:20 tim kados: I'm not sure how good a job I did setting up the marc/db links, but I could send them to you if you want them.
12:21 tim They would be in marc_tag_structure and marc_subfield_structure wouldn't they?
12:21 hdl no.
12:21 tim hdl: are you converting from Sagebrush Athena?
12:22 hdl I am french ;)
12:23 tim I'm talking to the wrong person :)
12:23 hdl no problem :)
12:23 tim I seem to do that a lot.
12:25 kados tim: did you have to redefine the marc links to match your records?
12:26 kados tim: (aside from holdings)
12:58 tim If I remember right, most of the changes were to try to get it to display more like the Athena easy edit screens.
12:58 tim To make the switch easier on staff.
12:59 tim I did that early on and then took so long working on holdings.  I need to check my work and see if it still makes sense to me.
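
As a rough sketch of what those links look like, the following would dump the MARC-to-Koha field mappings; this assumes Koha 2.x's marc_subfield_structure columns (tagfield, tagsubfield, kohafield), and the DSN and credentials are placeholders:

    #!/usr/bin/perl
    # Sketch: list the MARC-to-Koha field links stored in
    # marc_subfield_structure (e.g. "245 a -> biblio.title").
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'kohauser', 'password',
        { RaiseError => 1 } );

    my $sth = $dbh->prepare(
        q{SELECT tagfield, tagsubfield, kohafield
          FROM marc_subfield_structure
          WHERE kohafield IS NOT NULL AND kohafield <> ''
          ORDER BY tagfield, tagsubfield}
    );
    $sth->execute;

    while ( my ( $tag, $subfield, $kohafield ) = $sth->fetchrow_array ) {
        print "$tag $subfield -> $kohafield\n";
    }
    $dbh->disconnect;
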
15:11 tim Gonna try talking to the person I mean to talk to this time.
15:12 tim thd: did you see my post about the bug in the original marc conversion script?
15:12 tim And I was also wondering if you're moving from Athena.
15:21 thd tim: I had seen your second post, which I believe was the one I had linked with the URL, where you corrected your code from your first message.
15:22 thd tim: I am not moving from Athena, but am just generally interested in the problems involved in moving MARC data between systems and how best to overcome them.
15:54 tim It's the first and only perl script I wrote except for a few test scripts.
15:55 tim I hope it's helpful.
16:12 thd tim: It has helped me to think about the issues in migrating MARC records.  Thankfully, I do not have to migrate a system immediately.
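
The conversion script itself is not reproduced in the log, but a minimal sketch of the general shape of such a script, built on MARC::Batch, might look like this; the 952-to-852 holdings remapping and the subfield codes are purely illustrative:

    #!/usr/bin/perl
    # Sketch: read exported USMARC records, move holdings data from a
    # local 952 tag into a standard 852, and write the records back out.
    use strict;
    use warnings;
    use MARC::Batch;
    use MARC::Field;

    my $batch = MARC::Batch->new( 'USMARC', 'export.mrc' );
    open my $out, '>', 'converted.mrc' or die $!;

    while ( my $record = $batch->next ) {
        for my $field ( $record->field('952') ) {
            my $new = MARC::Field->new(
                '852', ' ', ' ',
                b => $field->subfield('a') || '',   # location
                p => $field->subfield('b') || '',   # barcode
            );
            $record->insert_fields_ordered($new);
            $record->delete_field($field);
        }
        print {$out} $record->as_usmarc;
    }
    close $out;
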
17:40 chris morning
17:47 kados morning chris
17:47 chris heya kados
17:49 kados composing a summary of our progress with zebra integration atm
17:50 chris cool
18:03 kados mason: so ... is it ok if I list you as a MARC expert? ;-)
18:03 kados mason: I'm putting together a summary of where we're at with Zebra integration for koha-zebra and I'm listing roles as well
18:08 mason hi kados, my marc is thin and rusty ;)
18:08 mason im just googling zebra
18:22 kados :-)
18:23 mason tell me a bit about zebra
18:24 thd kados: My MARC familiarity is reasonably good, although I gained much of my experience with MARC outside a library setting.  Do not let my confession about spending years as a bookseller mislead you.  William Moen, leading research into MARC at UNT, also spent many years as a bookseller before adopting library science as a profession.  That is usually done the other way round, when librarians retire to take up bookselling.
18:25 thd kados: I have not had much occasion to study UNIMARC though.  I have been learning much more about UNIMARC in the past few days :)
18:26 mason this zebra?
18:26 mason A system with Zebra installed acts as a dedicated router. With Zebra, your machine exchanges routing information with other routers using routing protocols. Zebra uses this information to update the kernel routing table so that the right data goes to the right place. You can dynamically change the configuration and you may view routing table information from the Zebra terminal interface.
18:28 chris nope :)
18:28 chris http://indexdata.dk/zebra/ <-- that zebra
18:30 kados mason: sorry ... writing a summary ...
18:31 kados mason: there's a new Koha list: koha-zebra
18:31 kados mason: we're going to be moving from MARC-in-SQL to using Zebra as the primary storage and retrieval engine for Koha
18:31 chris for the marc data that is
18:32 chris marc in sql makes little sense really, when i think about it
18:32 chris you lose all the advantages of a RDBMS
18:33 chris and end up with a ton of redundant data
18:33 kados yep
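
A minimal sketch of that migration direction: pull the records out of SQL as ISO 2709 and let zebraidx build the index. The biblioitems.marc column is a placeholder (Koha 2.x actually decomposes records across several marc_* tables), and the zebra.cfg path is assumed:

    #!/usr/bin/perl
    # Sketch: export MARC blobs from SQL to a file, then index with Zebra.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=koha', 'kohauser', 'password',
        { RaiseError => 1 } );

    open my $out, '>', 'records/all.mrc' or die $!;
    my $sth = $dbh->prepare('SELECT marc FROM biblioitems');  # hypothetical column
    $sth->execute;
    while ( my ($blob) = $sth->fetchrow_array ) {
        print {$out} $blob;    # ISO 2709 records concatenate cleanly
    }
    close $out;

    # Hand the exported file to Zebra for indexing.
    system( 'zebraidx', '-c', 'zebra.cfg', 'update', 'records' ) == 0
        or die "zebraidx update failed\n";
    system( 'zebraidx', '-c', 'zebra.cfg', 'commit' ) == 0
        or die "zebraidx commit failed\n";
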
18:34 thd I am having difficulty getting a question through on either the MARC or AUTOCAT listservs.  I never had trouble on the MARC list before a few months ago.  I think my *.com address is being identified as a potential source of spam.  Maybe I need to register a *.org address. :/
18:35 mason looks nice
18:35 mason esp. the pic of the zebra
18:36 chris heh
18:36 chris so our main task as i see it
18:36 chris is integrating with zebra in such a way that we can still offer our 2 views of the data
18:37 chris i counted 11 clients of ours, who use koha and dont ever want to see MARC
18:37 chris and 2 who do
18:38 kados chris: wow ... I didn't know you had that many koha clients!
18:38 thd Green, Rebecca. Design of a relational database for large-scale bibliographic retrieval. Information technology and libraries v. 15, no. 4 (Dec. 1996)  p. 207-221.
18:38 chris storing bibliographical data in a relational db is easy
18:38 chris storing MARC isnt
18:39 chris anyway, dont get me started on marc .. we are stuck with it :)
18:39 kados not necessarily
18:39 chris i think in the near future anyway
18:39 kados with zebra you can change the underlying record format
18:39 chris yeah
18:40 kados without changing the api at all
18:40 chris some ppl just love to see 245a
18:40 kados which is why it's so nice
18:40 chris instead of title
18:40 kados right
18:40 kados well it makes a difference ;-)
18:40 chris :)
18:40 kados because if you're talking about 245a or 246b ;-)
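
A minimal sketch of serving both views from the same data; the label table and the view flag are illustrative:

    # Show "245a" to MARC-minded staff and "Title" to everyone else.
    use strict;
    use warnings;

    my %label = (
        '245a' => 'Title',
        '245b' => 'Subtitle',
        '100a' => 'Author',
    );

    sub display {
        my ( $tag_subfield, $value, $marc_view ) = @_;
        return $marc_view
            ? "$tag_subfield: $value"
            : ( $label{$tag_subfield} || $tag_subfield ) . ": $value";
    }

    print display( '245a', 'Programming Perl', 0 ), "\n";  # Title: Programming Perl
    print display( '245a', 'Programming Perl', 1 ), "\n";  # 245a: Programming Perl
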
18:40 chris unido
18:40 rach do they? mad buggers
18:41 chris and the refugee woman was quite keen (student librarian from vic)
18:41 rach ah right
18:41 thd the article has the same conclusions koha developers have discovered for themselves with more effort
18:42 chris yep kados, so as long as we can offer both views, im happy
18:42 chris both = lots of
18:42 chris :)
18:42 chris thd: i didnt even attempt to get MARC in .. i left that for the more masochistic programmers to try :-)
18:43 thd :)
18:47 thd There has been a discussion on the MARC listserv just recently about RDB and MARC with the usual negative conclusions.  The addition of XML was considered to be no advantage for the database design problem.
18:48 chris yeah
18:49 chris it gets worse when you try to support more than one flavour
18:50 chris i think moving marc out of the db will be the best thing we can do
18:51 thd chris: I was wrestling with magic tricks to overcome the MARC21 UNICODE divide.  I have not found the right spell book :)
18:52 chris yeah, thats not even worrying about the other 20 thousand marcs (chris might be exaggerating slightly)
18:53 kados ok ... summary sent
18:53 chris cool
18:53 thd chris: fortunately, we are following after a significant degree of format conversion to MARC21 and UNIMARC
18:53 kados I spoke to sebastian this morning
18:53 kados he's interested in expanding the zebra api
18:53 kados perl api that is
18:53 chris cool
18:54 kados so that's something to keep in mind
18:54 thd kados: what is missing from the api currently?
18:55 kados thd: the perl api for zebra is not fully developed
18:55 kados thd: it was developed by a third party who's vanished ;-)
18:56 thd kados: you mean it does not support all the features that the C or whatever API does?
18:56 kados thd: there are other ways to tap into the api but a perl-specific interface would be ideal
18:56 kados thd: right
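
For a sense of where a fuller interface could go, ZOOM-style client code against Zebra might look like this; the host, port, and query are placeholders, and the ZOOM Perl bindings themselves were among the pieces still maturing at the time:

    #!/usr/bin/perl
    # Sketch: Z39.50 search against a Zebra server via ZOOM.
    use strict;
    use warnings;
    use ZOOM;

    my $conn = ZOOM::Connection->new( 'localhost', 9999 );
    $conn->option( preferredRecordSyntax => 'usmarc' );

    # PQF query: Bib-1 attribute 1=4 means "search by title".
    my $rs = $conn->search_pqf('@attr 1=4 "koha"');
    printf "%d hits\n", $rs->size;

    for my $i ( 0 .. ( $rs->size < 5 ? $rs->size - 1 : 4 ) ) {
        print $rs->record($i)->render, "\n";
    }
    $conn->destroy;
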
20:56 thd chris: are you still there?
21:49 chris am now
22:02 thd chris: I saw your post about how 11 of your 13 customers prefer to never see MARC.
22:02 thd chris: Did I interpret that correctly?
22:02 chris thats the one
22:03 chris most of them are corporate or special interest libraries
22:05 thd chris: If you read paul's migration email carefully, he seems to intend to keep the database structure along with blob storage or whatever for MARC records that zebra can index.
22:06 chris yep, i dont imagine it will be a problem, as long as we make sure that the non-MARC cataloguing pages update the files zebra indexes
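
A minimal sketch of that sync step: after a non-MARC save, rewrite the record's file and reindex. The paths, config file, and one-file-per-record layout are all assumptions:

    # Sketch: keep the Zebra index in step with edits made through
    # the non-MARC cataloguing pages.
    use strict;
    use warnings;

    sub update_zebra_index {
        my ( $biblionumber, $record ) = @_;    # $record is a MARC::Record

        open my $fh, '>', "zebra-records/$biblionumber.mrc" or die $!;
        print {$fh} $record->as_usmarc;
        close $fh;

        system( 'zebraidx', '-c', 'zebra.cfg', 'update', 'zebra-records' ) == 0
            or warn "zebraidx update failed for $biblionumber\n";
        system( 'zebraidx', '-c', 'zebra.cfg', 'commit' ) == 0
            or warn "zebraidx commit failed\n";
    }
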
22:08 thd chris: I would imagine there will always be ways to hide MARC from the user who may be frightened by 650 $a$x.
22:10 chris yep
22:10 thd chris: However, proper MARC support implemented well should give you the possibility of serving a whole new class of potential customers.
22:10 chris its just that when marc was introduced, the non-marc cataloguing features lagged behind and werent tested as well as the marc ones (as much my fault as anyone elses)
22:11 chris so just want to make sure we dont repeat our mistakes :)
22:11 chris thd: indeed and thats great, as long as we dont lose HLT (the original client and reason Koha exists)
22:12 chris (one of the 11)
22:13 thd chris: The potential new class of customers would be much larger and potentially a much better source of business than the institutions for which Koha had previously offered a solution.
22:14 chris yep, as i said thats great
22:14 chris as long as we dont turn into one of the big ILS vendors and forget about all the little ppl who made it possible
22:14 thd chris:  We will hide MARC from HLT.
22:15 chris im sure we will :) i was just saying that thats what my main focus with the zebra integration will be
22:15 chris in making sure it works for non-MARC .. ill leave the MARC bits to the people who know more
22:16 thd chris: My background is in bookselling, where I hid LIS stuff from the user while taking advantage of MARC records.
22:16 chris cool
22:17 thd chris: I am keen to make the system work well for the end user who should never need to know MARC.
22:17 chris cool
22:21 thd chris: MANY ILS systems require knowing how to encode 650##$a$x$z$y to use them at all or get much advantage from them.  Paul's work already goes a significant way to correct that.  I like to have data views that support the value of MARC but guide the user with meaningful labels where something is not self-evident.  No code numbers should trouble or frighten the user.
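
A minimal sketch of that kind of labelled display, using MARC::Field; the subject data and the " -- " separator are illustrative:

    # Sketch: show a 650 field as a labelled subject string instead of
    # raw tag/subfield codes.
    use strict;
    use warnings;
    use MARC::Field;

    my $field = MARC::Field->new( '650', ' ', '0',
        a => 'Cookery',
        x => 'History',
        z => 'France',
    );

    # Prints "Subject: Cookery -- History -- France"
    my @parts = map { $_->[1] } $field->subfields;
    print 'Subject: ', join( ' -- ', @parts ), "\n";
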
22:33 thd chris: There may be a standards compliant way of identifying the lesser degree of record detail that Koha had originally used with Dublin Core.
22:36 thd chris: Dublin Core actually scares me because it does not use code numbers and ignores the degree of detail found in full MARC.
22:36 thd :)
22:36 chris that would be cool
03:43 osmoze hello
09:44 kados morning owen
