Time Nick Message 11:06 kados speed in general 11:05 kados and were too ashamed to admit what it was ;-) 11:05 owen About the outage the other day? Or about the speed in general? 11:05 kados they must have found the problem 11:05 kados great 11:05 kados I just realized that I never did hear back from intelliwave 11:05 owen No real crawling slow times, even with heavy use in the past week. 11:05 kados cool 11:05 owen It's been pretty good. 11:04 kados owen: how's your network connection these days? 11:02 tim but when I look at the backup file it seems to have everything. 11:02 tim I was just trying to upgrade to 2.2.3 and the backup summary says it backed up 0 biblio entries, 0 biblioitems entries, 0 items entries and 0 borrowers. 09:41 paul so, it's "cat is away" ;-) 09:41 slef mmm, "cat is out" can mean "cat is hunting" 09:39 owen In English: When the cat's away, the mice will play 09:39 paul (in french we say : when the cat is out, mouses dances) 09:38 paul right. 09:38 owen We're going to have to get along without him for a while :( 09:38 owen I think he's about to leave for the American Library Association meeting in Chicago anyway 09:35 owen :D 09:35 paul (joshua was still here 1 hour ago, so don't expect him to be a really good programmer today. Don't ask him anything important if you want my opinion ;-)) 09:34 owen Hi paul 09:34 paul hi owen. 08:51 slef good sysadmins read release notes too 08:47 paul (& that are minor from their point of view) 08:46 paul I haven't announced some bugfixes that are impossible for librarians to understand. 08:46 slef ok, wasn't in release notes, that's all 08:44 paul checking... 08:44 slef paul: did you merge bug 984's patch? 08:32 hdl Good! 08:31 kados excellent! 08:31 paul http://sourceforge.net/project/shownotes.php?release_id=336931 08:31 paul https://sourceforge.net/project/shownotes.php?release_id=336931 08:31 paul Surprise for everybody : 07:18 paul good sleep 07:17 chris 2 breaks in telecoms 2 trunk fibres .. 
300 kilometres apart within 3 hours of each other .. that's one fast rat 07:16 chris heh 07:01 slef Today's award for discovering the blindingly obvious goes to the LA Times. 07:00 slef "The Los Angeles Times has temporarily ended its short-lived trial which gave readers the chance to edit its editorials on its website [...] they decided to end the trial early on Sunday after explicit photos were posted" 06:56 slef I tell you, if we built houses as well as the internet, the first woodpecker would wipe out civilisation. 06:56 slef "A rat is being partly blamed for a major communications crash which has caused chaos in New Zealand." !?!? 04:48 jean :) 04:48 paul since he was sending it back to me, I thought there was something new! 04:48 paul ah, ok. 04:48 jean Uh, I haven't changed anything in the document and we haven't yet discussed internally what changes to make to it 04:39 paul titled 'draft (again) of coding guidelines' 04:39 paul look in the koha-devel archives, Stephen Hedges' mail of June 16. 04:38 paul mmm... sorry, it isn't (yet) on kohadocs. 04:37 paul on that point. It spells out the usual rules and practices in Koha 04:37 paul jean/francoisl: about the PAQ and Perl practices, there has recently been a document on www.kohadocs.org (at the very end) 04:36 paul or else he sent me a bad version of the document. 04:36 paul jean, did you change something in the "optimisation..." document? Because the one flc just sent me doesn't contain any more lines about mod_perl 04:26 slef I guess it's pretty rare. 9 times out of 10, I write in German and the reply comes back in English, which is fine, but seems backwards to me with both of us using second languages. 04:25 slef there's something about getting a reply in german that makes me laugh 03:51 paul 'morning francoisl 03:47 paul good night 03:47 thd thd: well I must sleep ++++ 03:44 paul probably. 
03:43 thd paul: Would using zebra correct for this upon importing a preexisting set of marc records? 03:42 paul no 03:42 thd paul: Was the original subfield order supported prior to 2.2? 03:41 paul it's a limit of Koha 2.2 03:41 paul (you miss nothing) 03:41 paul no 03:40 thd paul: I just realised that Koha seems to have no means to preserve the order of subfields. 650#0$aVocal music$zFrance$y18th century would seem to become 650#0$aVocal music$y18th century$zFrance in Koha. In very many common and simple cases this problem would never be seen but it could occur in many fields. Am I missing something about how Koha stores data? 02:57 thd paul: and what about the missing marc fields in the standard framework distribution, especially the fixed fields? I cannot understand why the fixed fields would have been excluded except that they work differently from the others. 02:54 thd :-] 02:53 paul ;-) 02:53 paul as i have a customer with "builded" subjects, so I have to find a solution to this problem 02:53 paul no. 02:53 thd paul: a bug that will 'never' be squashed? 02:52 paul you should open a bug on bugs.koha.org 02:50 jean :) 02:50 thd paul: opac-detail 02:49 paul where opac-detail.pl or opac-MARCdetail.pl ? 02:49 thd paul: what about the links in the interface? 02:48 paul but the look is really poor. 02:48 paul thd : not difficult to search everywhere (with see also parameter) 02:48 thd paul: What is the difficulty about reimplementing a search for subject subdivisions such as 650#0$aVocal music$zFrance$y18th century ? 02:48 paul and here comes our good jean, as on every self-respecting Wednesday ;-) 02:47 jean hi 02:39 thd kados: Of course you need a subscription to the looseleaf service or an online subscription for full docs. 02:39 kados bbiab 02:37 thd kados: http://www.loc.gov/marc/bibliographic/ecbdnot1.html#mrcb521 02:36 kados there's some stuff in there 02:36 kados 1 | Young Adult. 
| NULL || 4330 | 147 | 521 | 20 | 0 | a | 1 | 3.7 | NULL || 4331 | 147 | 521 | 20 | 0 | b | 2 | Follett Library Book Co. | NULL || 4332 | 147 | 521 | 21 | 2 | a | 02:35 thd kados: 521 - TARGET AUDIENCE NOTE 02:35 paul (you missed my 9:10 sentence it seems :-D ) 02:34 kados ahh ... nice 02:34 paul (hdl waiting impatiently for a new computer...) 02:34 kados morning paul ;-) 02:34 paul did you receive a gift this morning ? 02:34 paul 'morning hdl. 02:33 kados morning hdl 02:33 kados nothing in 526b or 521a 02:32 hdl hi 02:31 thd kados: in the form specified for marc. I will search marc bibliographic. 02:31 kados howdy osmoze 02:31 kados I can check my data pretty quickly if you do 02:31 osmoze hello 02:31 kados thd: do you know the tag/subfield it would be in? 02:30 kados thd: in lexile score form? 02:30 kados thd: where? 02:30 thd reading level should be encoded in marc records already 02:30 kados something like 600,000 images ;-) 02:29 kados (haven't done POST before ... GET would be pretty easy though) 02:29 kados something I won't have time to do before ALA ;-) 02:28 kados yea ... it's just a matter of writing a little script to query the isbn search via POST and scrape the score 02:22 chris there are a few in wellington using koha now (high schools) 02:22 chris hmm lexile would be kinda cool for school libraries 02:17 kados glad you like it 02:16 thd kados: I did get your javascript demo working and it looks nice. I may not have noticed the bottom of the screen change at first. I was afraid to repeat my attempt the first time to avoid some problem that might crash my x-windows session. 02:13 kados maybe 'freeill' or something 02:13 kados yep 02:12 thd kados: I imagine you will need a somewhat different name to avoid a trademark conflict. 02:11 kados not related at all 02:11 kados nope 02:11 thd kados: so the references I saw to mike and ill are not related to the existing Open ILL system? 02:10 opaul when paul awakes, joshua is almost going to bed. 
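thd's subfield-order observation (650#0$aVocal music$zFrance$y18th century collapsing to $a$y$z) comes down to how the field is stored: a per-subfield-code lookup loses the original order, while an ordered list of (code, value) pairs keeps it. A minimal Python sketch of the difference, using the example field from the log; nothing here is Koha's actual schema:

```python
# Sketch: why storing subfields keyed by code loses order, while an
# ordered list of (code, value) pairs preserves it.
# Example field from the log: 650 #0 $a Vocal music $z France $y 18th century

subfields = [("a", "Vocal music"), ("z", "France"), ("y", "18th century")]

# Order-losing storage: one slot per subfield code, re-emitted in code order.
by_code = {}
for code, value in subfields:
    by_code.setdefault(code, []).append(value)

reordered = [(code, v) for code in sorted(by_code) for v in by_code[code]]

# Order-preserving storage: keep the pairs exactly as they appeared.
preserved = list(subfields)

def to_marc(field_tag, indicators, pairs):
    """Render a field back to a compact MARC-ish text form."""
    return field_tag + indicators + "".join(f"${c}{v}" for c, v in pairs)

print(to_marc("650", "#0", reordered))   # $a...$y...$z -- order lost
print(to_marc("650", "#0", preserved))   # $a...$z...$y -- original order kept
```

Round-tripping through the code-keyed form silently turns "Vocal music -- France -- 18th century" into "Vocal music -- 18th century -- France", which is exactly the 2.2 limitation paul confirms.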
02:10 opaul koha is a 24/7 project. 02:09 kados (we may rename it) 02:09 kados to develop the new namespace for openill 02:09 kados I'm working with Mike Rylander 02:09 kados the other major open-source ILS, Evergreen, will also support the new Openill 02:09 thd kados: meaning? 02:09 kados not that independent though 02:08 kados thd: yep 02:08 thd kados: then you have independent intentions as there is no FOSS ILL system yet? 02:06 kados yep 02:06 thd kados: they offer services based on their cold fusion implementation but no code. 02:04 kados which doesn't bode well for porting to php 02:04 kados plus they deployed on cold fusion 02:04 kados and no releases yet 02:04 kados they've been in production for over two years 02:04 kados thd: right ... well i'll believe it when I see it 02:03 thd kados: Were you planning to work with OpenILL for your ILL idea? They announced moving to PHP followed by a code release in January. 02:02 kados chris I've been thinking about grepping lexile scores from http://www.lexile.com and displaying them in the opac 01:53 indradg apache is running fine though... so hopefully we'll have it worked out soon 01:53 kados indradg: ahh ... 01:53 indradg instead of mysql user 01:52 indradg /var/lib/mysql getting owned by root 01:52 kados indradg: what kind of problems? 01:52 indradg kados, we are having some problem with the mysql server permissions on the liveCD.... we are trying to figure it out.. hopefully it will be ready before u leave for ALA 01:50 thd At least many companies, even OCLC, are a little friendlier to open source, and Index Data licensing terms are now friendly where formerly they required a fee-based commercial license for commercial use. 
01:45 kados OCLC is good at that ;-) 01:44 kados yep 01:43 thd kados: Other examples are OCLC open sourcing some outdated software while the current version is closed source, and then they prohibit public use of detailed DDC hierarchies out of their expressed fear of other libraries taking the DDC without paying a license fee. 01:39 kados ahh 01:39 thd kados: exactly 01:39 chris yep, i think that he was saying you should make that point 01:39 kados hmmm ... well LibLime doesn't have any proprietary systems 01:38 thd kados: I do not have a specific reference but I have increasingly seen companies such as ILS companies announcing some small open source component but you have to license their proprietary system for it to do any good. 01:37 chris i think he means, there are a bunch of companies who claim to use opensource, but only do in a very small way 01:35 rach don't stay up too late :-) 01:34 kados thd: I don't really understand the question 01:29 thd kados: how does your marketing distinguish yourself from other companies wearing the open source banner in a small way while their core product is closed source? 01:02 kados heh 01:02 rach did I see someone offering to do a klingon translation? 01:02 rach :-) 01:01 kados if we're not careful version 3 might just be a star trek communicator ;-) 01:01 rach maybe I've seen it too often :-) 01:01 rach :-) 01:01 kados hmmm ... i actually like the older one better ;-) 01:00 rach I'm less a fan of the egg with the green middle as a logo; I like the newer one, but it works with the girl with it in her hand 00:59 rach yep 00:59 kados yea ... the NZ stuff doesn't fly here as well ... folks are used to pushiness ;-) 00:58 rach so prolly good for your audience 00:58 kados heh 00:58 rach (and quite american :-) 00:58 rach it's quite busy, but that's pretty normal I think 00:58 rach nice use of people 00:57 kados any comments on layout/graphics? 00:57 rach :-) 00:57 kados next time ;-) 00:57 kados damn ... 
should have had you look at this last week ;-) 00:57 kados yep ... that would be better 00:57 kados right 00:57 rach I want to keep going with "you" 00:56 rach open source is cool, but has been hard to get into. We offer services to *you*. We offer services to other libraries who are vendor dependent 00:56 kados I think I see what you mean now 00:55 rach ah I read - (starting at open source is the difference) 00:55 kados here's why open source rocks 00:54 kados here's how we help 00:54 kados we can help you use it 00:54 kados but probably aren't using it 00:54 kados you've heard about open source 00:54 kados we use open source 00:54 kados we're different because 00:54 kados we can help 00:54 kados you've got this problem 00:54 kados in my mind it reads: 00:53 kados :-) 00:53 kados i don't get it 00:53 rach then the next sentence needs to still be personal - now you don't have to have this problem 00:53 rach then make it personal - you have had this problem 00:53 rach setting out the problem 00:52 rach so you start out general, in first para 00:52 rach ah no that's fine 00:52 kados so instead of "But lack of vendor support has made it impossible for many libraries to benefit" 00:52 kados :-) 00:52 rach (and if they don't they will stop reading :-) 00:51 rach so you could just keep it personal - so first line, we've established they need an OS vendor 00:51 rach but the next line changes focus and is back out to "other libraries" 00:51 rach well the first line is "personal", says your 00:50 kados yep 00:50 rach as everyone hates their vendors as well :-) 00:50 kados hehe 00:50 rach it's like "used car salesman reliant" 00:50 rach yeah 00:49 kados hmmm ... i'll have to ask my librarian friends ;-) 00:49 rach yep - vendor reliant I think would have a negative connotation here - umm, reliant meaning sort of tied to 00:49 kados so how would you put it rach? 00:49 indradg rach, i agree... 
over here that line wud spell to some ppl "we think we understand ur job better than u do" 00:48 kados or maybe I've just been looking at it too long ;-) 00:48 kados maybe it's an american thing 00:48 rach maybe it's cultural :-) 00:48 kados I don't quite see that tone 00:48 indradg rach has a point 00:48 kados huh 00:47 rach it's the vendor reliant that I thought might get a few backs up 00:47 indradg that sounds mucho better! 00:47 rach yep 00:47 kados We make it possible for vendor-reliant libraries to use open-source software--like Koha--by providing them with outstanding support and training options. 00:47 kados the final proof has: 00:47 rach this stuff is hard tho 00:46 kados he 00:46 rach and you don't need to feel like a loser cause you can't do it yourself :-) 00:46 rach ie - you don't need to hire new people 00:46 rach we make it possible for libraries like yours to use OS software like koha, by providing outstanding support and training for your existing staff 00:45 rach ah and so - the next sentence is a little negative 00:45 kados gotcha 00:44 indradg kados, i need to check out on that... was away from the city for the last 36 hrs... just got back 00:44 kados hehe 00:44 rach unless you're actually supporting other vendors - rather than the libraries? 00:44 kados indradg: how's the livecd coming? 00:44 kados indradg: thanks ... it prints at 8.5/11 in 00:44 kados indradg: glad you're around ;-) 00:44 rach yeah which is wrong, they aren't the vendors 00:44 indradg kados, nice job.... how big does this thing print in hardcopy? 00:43 kados or I'm saying that they're vendors ;-) 00:43 kados right 00:43 rach well you're saying they do :-) 00:43 rach ? 00:43 kados or do the librarians have need of vendors ;-) 00:43 kados do vendors have needs? 00:43 kados it's kinda ambiguous too 00:43 rach I think it's the "on" 00:42 kados yea ... 
that would be better 00:42 rach "we founded lib lime to meet your needs for an open source vendor" 00:42 rach "we founded liblime to meet your vendor needs on open source" 00:41 kados what's that? 00:41 rach and a turn of phrase that is a bit odd to my "ear" but may be how you'd express it 00:41 kados yep 00:41 rach ah well next time :-) 00:40 kados I noticed it on the proof 00:40 kados yea ... too late to fix that now ;-) 00:40 rach a slightly odd hyphenation - in-teroperability 00:39 kados :-) 00:39 kados it's under RSS 00:39 rach although I don't see XML in there 00:39 kados hehe 00:39 chris :) 00:39 rach looks to be fully buzzword compliant :-) 00:39 rach looks good 00:36 kados other than that it's pretty much the same 00:35 kados (two problems on it that we fixed in the final proof: 1) layer prob with one of the blurry opensearch proxy images and 2) on the outside-backside there's a square around the Koha logo 00:34 kados I'd love to get your reaction to the brochure 00:34 kados hehe 00:34 chris it's just us poor saps in the burbs that have to pay :) 00:34 rach oh yeah, no money issues, that's why it can be erratic :-) 00:34 rach it's just a bit erratic 00:34 kados ahh ;-) 00:34 kados right ... well I meant having to pay and all that 00:34 chris rach is ok, they are on a flat rate plan :) 00:33 rach we don't have bandwidth issues all the time 00:33 kados actually, that's not the final revision ... 00:33 rach it's here 00:33 rach :-) 00:33 kados in case you don't care ;-) 00:33 kados http://liblime.com/liblimebifold.pdf 00:33 kados well ... since you've got bandwidth issues better ask him for it -- it's quite large 00:32 kados brochure 00:32 rach nope 00:32 kados did chris show you our brochures? 00:32 kados thank you very much 00:32 rach cool :-) 00:32 kados rach: absolutely 00:32 kados on the open-source front it'll be us and indexdata 00:32 rach was the box any use to you? 
00:32 kados also a bit nervous 00:31 kados pretty excited 00:31 rach so are you excited about going off to ala? 00:31 kados :-) 00:31 rach well of course not right now :-) 00:31 kados heh 00:31 rach you have personal time kados? 20:34 kados slef: (right now that's pretty much taking up all my personal time) 20:33 kados slef: I'll do a bit more research about the issue when I get back from ALA 20:33 kados slef: I see your point about RSS 2.0 vs. RDF 19:36 thd slef: Then the US would just need to persuade the rest of the world to adopt X12 extended :) 19:35 thd slef: I have a significant background in the book trade. X12 format XML is used for a book trade ordering standard in the US. Perhaps that could be adapted or extended for ILL. It would be nice for one format to be used for both orders and loans. 19:31 thd slef: I assumed he was working with an already existing standard. I guess I am forgetting something. 19:28 slef kados? 19:26 thd slef: Which someone? 19:26 slef It sounds like someone's working on ILL support in XML anyway... ;-) 19:24 slef Not sure about ILL support. It might need developing. There's already been taxonomy and search support for years now. 19:21 slef RDF is a more general bunch of tags 19:21 slef and it's rss 0.9 / 1.x 19:21 slef well, it's still going on, but it seems bleak... I would hope that librarians of all people would appreciate the benefits of namespaces and rdf 19:20 thd slef: What support is there for ILL over rdf 0.9 / 1.0? 19:19 thd slef: So the difficulty is that rdf 0.9 / 1.0 lost the marketing battle to rss 0.92 / 2.0? 19:14 chris k 19:14 rach I'll pop back when I'm out this afternoon and see if the stop words thing is it, and if not, I'll go down the error logs route 19:13 rach so we go and add a biblio, and it's up to number 24, and it adds a group, but not the item 19:13 chris no idea then, nothing in the error logs? 
19:13 rach it doesn't come up with an error at all - it's a bit odd actually 19:13 rach no it's not doing that 19:13 chris if it's internal server erroring anyway 19:13 rach ah yes 19:12 chris probably the no stop words 19:11 rach ahhh I have just read chris's e-mail 19:11 rach hmm - I just visited a site who've got 2.2.2 on windows xp I think, and I believe it's not saving the item data 19:10 slef they're probably in cahoots with logging companies, so block pro-tree sites ;-) 19:09 thd slef: maybe my isp does not want me to see this. 19:05 slef worksforme and registration looks ok 19:03 slef hrm? 19:02 thd slef: no dns for http://www.thewalks.co.uk/ 18:59 thd slef: yes, write it down before writing it up. Most people's best thoughts are forgotten. At least there is a log here :) 18:57 slef by the way, http://www.thewalks.co.uk/makerss.rc if you like fun shell script 18:57 slef not thought of that one before 18:57 slef mmm, maybe I should write that up 18:56 thd slef: your perl analogy is clear :) 18:56 slef that is, a large perl system without using modules at all 18:55 slef yes, it used to be done and can still be done, but most people don't do it any more 18:55 slef basically, imagine writing a large perl system all in the global namespace 18:54 slef It can't carry some (including itself) but also it cannot be combined at all with ones doing the same bad practice as itself and cannot be processed with some standards-compliant XML tools (but mostly they are adding workarounds for this sort of stunt). 18:51 thd slef: So rss 2.0 cannot carry some types of xml files? 18:49 slef http://cr.yp.to/ IIRC 18:49 slef Daniel Bernstein, developer of the (not free software) qmail and djbdns 18:48 thd slef: excuse my ignorance. What does djb stand for? 18:46 slef uh, do you know about djb? Basically, ignore the parts of standards that you don't like ;-) 18:46 slef They were produced essentially by Dave Winer, the djb of the Semantic Web. 
0.92 backed by Userland Software - can't remember whether 2.0 was released before Dave Winer moved it to Harvard or not. 18:43 thd slef: So how did rss 0.92 and later versions come to be developed in a non-standards-compliant manner? 18:43 slef ...so developers looking to quickly add RSS support add the non-XML/RDF one :-/ 18:43 slef and then the next Really Simple Syndication was released as RSS 2.0... 18:42 slef RDF Site Summary was updated with new modules (Dublin Core!) to become RSS 1.0 18:42 slef RDF Site Summary was RSS 0.9 and then the first Really Simple Syndication was released as RSS 0.92 18:41 slef the rss problem is mainly that there are two: RDF Site Summary (RDF is nice and librarians seem to like it, which I think is promising) and Really Simple Syndication (a mix of ideas from Channel Description Format and RDF Site Summary with marketing chutzpah mixed in) 18:39 slef this is xml and including objects in each other 18:38 slef there are probably other cases where this breaks, but I'm not 100% sure... encryption seems a likely one 18:35 thd slef: I do not have enough rss background to appreciate the problem fully. I know I am not really with the 21st century unless I know rss ;) 18:32 slef there's no way of saying where the tags to describe it come from, unless we define some as a workaround 18:32 thd slef: So the channel data is undefined in your example? 18:30 slef namespaces canonicalise tags, similar to module hierarchies in perl - "is this $Version $::Version or $DBI::Version?" 18:28 thd slef: I am somewhat confused about the use of namespace in the discussion 18:24 slef so, say I have a search engine of RSS-2 feeds... 
I can't return any channel details in the results because they're not in a namespace 18:23 slef that case is actually solvable, but not obvious 18:19 slef actually 18:18 slef say I have a search engine search engine, which searches for a matching OpenSearchDescription 18:17 slef um, the results can't be expressed, basically 18:16 thd slef: with the results returned or the query? 18:16 slef as in, some types of searches need ugly workarounds... it's totally unnecessary to do that in xml. xml is meant to be extensible. 18:15 slef thd: it conflicts with some possible searches. 18:14 thd slef: what is the practical implication of the opensearch conflict with xml? 18:14 slef http://purl.org/rss/1.0/modules/search/ 18:12 slef kados: the OpenIll namespace is (should be?) separate; are there many opensearch portals?; connecting to searches should be done anyway, through html link or RSS-1 textinput. 18:05 kados slef: opensearch does three things: standardizes ILL with the OpenIll namespace; opens up Koha catalogs to all opensearch portals; brings live search results RSS feeds to Koha 18:00 thd slef: the funny part is that overwriting the relation attribute breaks the use of some blogging software micro formats that use the relation attribute when they appear in comments. 17:58 thd slef: the relation attribute supports multiple values but there has been a problem with some blogging software overwriting the relation attribute with a 'nofollow' value without preserving the original values as part of an anti-link-spamming measure. 17:55 slef no, just the standardised contents 17:55 thd slef: w3.org has the syntax standard. I thought you had found a list of standard implementations for the relation attribute. 17:43 slef kados: *sigh* will it become terribly polarised? I just don't see what opensearch brings and you don't seem to express it. 
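slef's core point is that RSS 1.0 elements live in declared namespaces, so a processor can tell whose <title> it is, while RSS 2.0 elements live in no namespace at all. A small sketch with Python's stdlib xml.etree showing the difference; the feed contents are invented for illustration:

```python
import xml.etree.ElementTree as ET

# RSS 1.0 (RDF Site Summary): every element belongs to a declared namespace.
rss1 = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                   xmlns="http://purl.org/rss/1.0/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel rdf:about="http://example.org/feed">
    <title>Example feed</title>
    <dc:creator>someone</dc:creator>
  </channel>
</rdf:RDF>"""

# RSS 2.0: no namespace on <rss>, <channel> or <title>.
rss2 = """<rss version="2.0">
  <channel><title>Example feed</title></channel>
</rss>"""

RSS1 = "{http://purl.org/rss/1.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

root1 = ET.fromstring(rss1)
# The namespace makes each element's origin unambiguous, like a Perl
# package name ($DBI::Version rather than a bare $Version).
title1 = root1.find(f"{RSS1}channel/{RSS1}title")
creator = root1.find(f"{RSS1}channel/{DC}creator")

root2 = ET.fromstring(rss2)
# In RSS 2.0 the tag is just "title": nothing says which vocabulary it
# belongs to once it is embedded inside some other XML document.
title2 = root2.find("channel/title")

print(title1.tag)  # {http://purl.org/rss/1.0/}title
print(title2.tag)  # title
```

Embed the RSS 2.0 channel inside a search-results document and its bare tags collide with, or vanish among, everyone else's bare tags, which is exactly the "can't return any channel details" problem above.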
17:42 slef kados: this sounds like you not seeing how in practical cases using javascript will cause any problem ;-) 17:42 kados slef: let's continue this on-list 17:41 slef thd: and then -> rel -> link-types 17:41 slef thd: "Links" sorry 17:41 slef so why would anyone want to do that? It's not like it's hard to find free software XML parsers that handle namespaces 17:41 thd slef: www.w3.org/TR/html401 is a table of contents page. Which section is relevant? 17:41 kados I don't really see how in practical cases using rss-2 will cause any problem 17:40 slef let's use XML that doesn't conflict, like RDF 17:40 kados so what? 17:40 slef rss-2 conflicts with various other XML specs (including OpenSearchDescription, apparently) 17:39 kados I'm sick of following the leader and ending up with shitty library interfaces etc. 17:39 kados how do standards get written in the first place? 17:39 kados what good is XML if I can't define how to use it? 17:39 kados that's the point! 17:39 kados you can't break xml 17:39 kados slef: if a server supports the OpenIll CQL namespace 17:38 slef but I don't see what opensearch gets you over HTTP GET, apart from breaking XML. 17:38 kados slef: yep ... within the query term you use CQL 17:38 slef I can see why CQL could be useful at this level 17:38 kados slef: it's quite easy really ... 17:38 kados slef: there's nothing hard about it 17:37 slef What's the hard part of this problem? 17:37 thd slef: all but MARC21 and USMARC are a larger market to start with 17:37 kados slef: yep 17:37 slef kados: so how do you have an opensearch which returns a list of opensearches? Define a new namespace iCantBelieveItsNotOpensearch? 17:37 kados slef: what I care about is coming up with a really great federated search 17:36 kados slef: I don't give a rats ass what the root node says 17:36 kados slef: that's just semantics 17:36 chris which MARC ? 17:35 slef ...which is quite funny to me. 
;-) 17:35 slef I think that means you can't construct an opensearch which returns opensearches. 17:35 slef http://a9.com/-/spec/opensearchdescription/1.0/ 17:35 slef + Note: the xmlns attribute must equal 17:35 slef Description document. 17:35 slef * OpenSearchDescription - The root node of the OpenSearch 17:35 slef oh my 17:35 thd kados: Why isn't complete marc part of the standard install for koha? 17:33 slef generally... found them in www.w3.org/TR/html401 17:32 thd slef: for <link rel="XXX" .../> types do you mean for opensearch only or generally? 17:30 slef Could do cool auto-discovery things with <link rel="index" type="application/rdf+xml" href="/path/to/xmlsearcher" /> telling you to try /path/to/xmlsearcher?querystring 17:28 thd slef: costa rica looks like the best option from the US 17:27 slef thd: hello Angola? 17:26 thd slef: I have researched those countries that may still be free from software idea patents to host a server once all the rich countries fall in the ip wars 17:24 slef who maintains the list of <link rel="XXX" .../> types? 17:23 thd slef: prior art does not matter much if the cost of litigation expense is your real risk. 17:22 slef heck, <isindex> is almost prior art ;-) 17:22 slef actually, opensearch has prior art in plone, I'm pretty sure 17:22 thd kados: the missing information can always be added to the framework by the user, but when it is not standard an interested library ought to be very suspicious about koha despite its favourable direction. 17:19 thd kados: koha needs bidirectional mapping for marc so any marc record imported can be modified and exported in marc communications format without data loss from the default framework. This requires a complete one-to-one mapping to be standard for every field, subfield and indicator any record might ever have. 17:17 slef ILL (Interlibrary Loan) protocol (ISO 10160/1) 17:15 thd kados: there are two or three public paragraphs on agogme.com. 
Generally browse-oriented information finding, concentrated on bibliographic records with extensions to other information domains. 17:12 kados what's the project? 17:11 thd kados: my interest is really much broader, considering the favourable directions the project is going. If it can query millions of records efficiently then I will consider developing with koha, although I have been using zope for my project experiments because of some nifty features that are difficult to implement in perl. 17:08 thd kados: well I am interested in all bibliographic automation systems and koha has added almost enough MARC support for me to use it at least for copy cataloguing. 17:06 kados so what's your interest in Koha? 17:05 kados right ... got it now ;-) 17:05 thd kados: Dukleth 17:04 kados thd: heh 17:04 thd kados: you have not done a whois on agogme.com yet? :) 17:03 thd owen: who does? I have had no answer on the devel list and the issue is critical for using koha to copy catalogue. 17:03 kados thd: so th is for thomas ... what's d for? 17:02 owen Sorry, my thing is templates, mostly. I don't know enough about imports to be able to help 17:02 thd owen: do you know anything about the new bug where marc import fails when no isbn is present in the imported record? 17:00 owen It's an old bug I forgot to commit the fix for 17:00 owen NPL is the only one with the WorldCat link 16:59 thd owen: all templates or just npl 16:58 thd kados: agogme.com 16:57 kados thd: so where are you from? 16:56 owen Template bug 16:54 thd kados: no this is only in koha 16:54 kados thd: no idea ... take it up with OCLC ;-) 16:54 thd kados: Why do all Open WorldCat searches have 'england' in the query string? 16:53 kados thd: sort of 16:53 thd kados: are you still here? 
16:50 thd I have done some work on a standard means for changing the base url for public and cross-institutional use; otherwise the base only points to a fixed resolver, maybe not the one at your institution if you have found the openurl in a public place 16:47 thd slef: mostly used for accessing journal databases in academic libraries with many different databases, rather than mostly consolidated by ebsco or proquest as at many public libraries with less need for openurl at present 16:45 kados nite paul 16:45 paul have a good day. 16:44 thd slef: openurl allows persistent access to the most appropriate copy of a biblio for the institution where the user is affiliated 16:44 kados ok ... meeting over 16:44 slef opensearch looks proprietary *shrug* 16:44 paul ok, going to bed now 16:43 kados which is what opensearch/openIll is 16:43 slef yep, standards page a bit thin 16:43 kados not any standards for how to 'roll your own' 16:42 kados but looks like just 'vendors' who provide federated searching 16:42 kados right ... I'll take a look 16:42 slef there's some stuff there about federated searching 16:42 kados journals, databases, etc. 16:41 kados openurl is a linking method for keeping track of subscriptions to various online stuff 16:41 slef I don't remember openurl 16:41 kados and it's included in the portal as one of the result sets 16:41 kados like I said, CUFTS is an openurl linker 16:40 kados not really for ILL I don't think 16:40 kados that's just openurl stuff 16:40 kados yea 16:38 slef The Library of Congress Portals Applications Issues Group http://www.loc.gov/catdir/lcpaig/ 16:37 thd So, what are the difficulties in restoring subject linking where science--methodology links to science--methodology but not science? 16:36 kados heh 16:36 slef why not start right? It's not like javascript is easy to write ;-) 16:36 paul almost midnight here. 16:35 paul ;-) 16:34 slef then it's just a beta... then it's just a first production roll-out... 
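The OpenURL mechanics kados and thd describe come down to this: the citation travels as key/value pairs in the query string, and only the base URL (the institution's resolver) changes, which is exactly the fixed-base problem thd raises for links found in public places. A hedged Python sketch with the stdlib; the resolver hosts and citation values are invented, and the keys follow the general shape of OpenURL 0.1 key/value pairs rather than any particular institution's profile:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def make_openurl(resolver_base, **citation):
    """Attach a citation, as key/value query pairs, to a resolver base URL."""
    return resolver_base + "?" + urlencode(citation)

# Hypothetical citation; only the resolver base differs between institutions.
citation = dict(genre="article", issn="1234-5679", date="2005",
                volume="12", spage="34", atitle="An example article")

link_a = make_openurl("http://resolver.example-university.edu/openurl", **citation)
link_b = make_openurl("http://resolver.other-library.org/openurl", **citation)

# Both links carry the same citation; each institution's resolver can then
# point the user at the most appropriate copy for that institution.
assert parse_qs(urlparse(link_a).query) == parse_qs(urlparse(link_b).query)
```

Swapping the base while keeping the query string is the "changing the base url" operation thd mentions working on.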
16:34 kados :-) 16:34 thd kados: sorry humour :] 16:34 kados thd: it's just a proof-of-concept ... 16:33 thd kados: everything should work in lynx, links, elinks if it can without client side javascript. 16:33 kados adjourned even ;-) 16:33 kados OK ... meeting adjurned 16:32 kados :-) 16:32 slef no, my eyes are buggy, that's all 16:31 kados I'm all for text-based interfaces ... but you're insane ;-) 16:31 slef so, it's going to wait until tomorrow, when my eyes have recovered 16:31 kados won't work if it doesn't 16:31 slef dunno 16:31 kados does links support XMLHttp? 16:30 kados slef: let's talk about this after you've seen it (so we're on the same page) 16:30 slef I don't have a build of links with javascript support handy. 16:30 thd kados: as long as the targets are all z39.50 that is good and every server should support z39.50 16:30 kados yea 16:30 slef which demo? Your javascript one? 16:29 kados slef: have you seen the demo? 16:29 slef why "with opensearch"? Isn't it just "with a defined API"? 16:28 kados very easily 16:28 kados but with opensearch we can proxy _any_ z39.50 target 16:28 thd slef: agreed, there are problems with poorly defined search queries that may work for one target but not others 16:28 kados so it's pretty worthless 16:28 kados yea ... they did ... but they haven't released a stitch of code in three years and implemented it in coldfusion anyway 16:28 slef No, I don't know what's out there. I'd only got as far as researching CQL by today :-( 16:27 kados :-) 16:27 kados ahh ... you mean OpenILL (the other OpenILL?)? 16:27 kados got a link? 16:27 kados haven't heard of that 16:26 slef This implementation scares me 3 ways though. Haven't the library and information scientists cooked up one based around RDF and Dublin Core yet? 16:26 kados with some standardization 16:26 kados slef: the power of using XML for returning results is that I can do anything I want with it 16:26 slef FWIW, I think the idea of a federated search is a good one. 
16:26 kados slef: so we'll have to avoid that then ;-) 16:25 slef kados: sometimes an XML processor that doesn't know about RSS-2's special requirement will not make it the default namespace and suddenly most RSS-2 tools don't recognise it. 16:24 kados slef: right ;-) 16:24 kados slef: breaks? 16:24 slef there are two things called RSS, some confusion marketing and an april fool's joke gone wrong 16:23 kados thd: shouldn't be ... I've got a fedora box it's running on fine 16:23 kados slef: every six months or so I forget how all the rss stuff works 16:23 thd kados: could it be an OS issue? I am using Debian Sarge presently? 16:23 slef unfortunately, it's building on shaky foundations and stuff breaks when you stretch it far enough 16:23 kados slef: right 16:22 kados thd: so if you can enable javascript for a minute you'll see the demo 16:22 slef (RDF, RDF Site Summary/RSS 1, and Semantic Web) 16:22 slef kados: quite likely. The RSS 2 crowd are better salesmen than the RDF ones. 16:22 kados thd: well ... the page requires javascript to work 16:22 owen The demo works fine for me in firefox (Win, 1.0.4) 16:21 thd kados: yes and it did nothing but impair keyboard shortcuts when I tried 16:21 kados slef: so that applies to a9.com too then 16:21 kados thd: hmm ... sure you've got javascript enabled? 16:21 slef it's not... it's disembodied junk floating in xml 16:20 thd kados: dying only because I could not get your demo to work in firefox earlier 16:20 kados slef: dunno 16:19 slef kados: and what namespace is rss and @version in? 16:19 rach but am happy to offer moral support 16:19 kados :-) 16:19 kados anyway ... meeting seems to be dying down 16:18 kados so two namespaces ... 
opensearch and openIll 16:17 kados <rss version='2.0' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:openIll="http://open-ils.org/xml/openIll/1.0"><channel> 16:17 kados slef: the namespace is listed in the link above 16:16 kados (right now it just handles the relevance ranking) 16:16 slef kados: "The elements defined in this document are not themselves members of a namespace" (Really Simple Syndication spec) 16:16 kados that Mike Rylander and I have been working on 16:16 kados with a new namespace OpenILL 16:16 kados using opensearch 16:16 kados that's the XML results for a generic search on 'cats' 16:15 kados slef: so if you look at this: http://search.athenscounty.lib.oh.us/cgi-bin/koha/opensearch?q=cats 16:15 kados slef: just because we need rss in the namespace doesn't mean we can't expand it 16:14 kados slef: seems to be working ok so far ;-) 16:14 kados paul: exactly 16:14 paul ok, got it. 16:14 slef kados: RSS 2.0 doesn't support XML namespaces, always needing rss in the default namespace. 16:14 paul that's where ILL arrives. 16:13 kados paul: and let users 'request' items from other libraries too 16:13 kados paul: we can go another step 16:13 kados paul: when we get NCIP going 16:13 kados paul: basically 16:12 kados thd: in fact, the CUFTS listing there is an openurl resolver for journals 16:12 paul so, you plan to use this for OPAC. And in koha-db, we add a table where we store "opensearch servers to query", and, if the user requests, we extend a search to other catalogues. that's it ? 16:12 kados thd: a9 doesn't ... but there's no reason that opensearch can't 16:12 kados slef: we're expanding on the namespace so all the usual rules apply 16:11 kados paul: the sites it's querying use perl server-side to generate the XML 16:11 paul ok, good. 16:11 kados paul: (but the page is just html) 16:11 kados paul: the proof-of-concept is a mixture of perl and javascript 16:10 paul or just html/javascript client side ? 
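The feed shape kados pastes above mixes plain RSS 2.0 elements, which sit in no namespace (slef's objection), with extension elements in the openSearch and openIll namespaces. A minimal consumer sketch of that layout, assuming a hypothetical `openIll:relevance` element for the relevance ranking kados mentions (the element name and sample records are made up):

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical response in the shape of the pasted <rss> header:
# core RSS elements are unqualified; extensions carry their namespaces.
FEED = """<rss version='2.0'
    xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/'
    xmlns:openIll='http://open-ils.org/xml/openIll/1.0'>
  <channel>
    <title>koha search: cats</title>
    <openSearch:totalResults>2</openSearch:totalResults>
    <item><title>The Cat in the Hat</title>
          <openIll:relevance>97</openIll:relevance></item>
    <item><title>Cats of the World</title>
          <openIll:relevance>80</openIll:relevance></item>
  </channel>
</rss>"""

OS = "{http://a9.com/-/spec/opensearchrss/1.0/}"
ILL = "{http://open-ils.org/xml/openIll/1.0}"

channel = ET.fromstring(FEED).find("channel")       # unqualified RSS element
total = int(channel.find(OS + "totalResults").text)  # namespaced extension
hits = [(item.findtext("title"), int(item.findtext(ILL + "relevance")))
        for item in channel.findall("item")]
print(total, hits)
```

The point of the two lookups is the mixed-namespace issue under discussion: a consumer addresses `channel` and `item` with bare names but must fully qualify the openSearch/openIll extensions.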
16:10 paul is all of this in Perl ? 16:10 thd kados: does a9 have any support for openurl? 16:09 kados paul: (in fact that's what my portal does for NPL's dataset) 16:09 kados paul: we can translate Z39.50 results into opensearch results very easily 16:09 kados paul: but even if they don't 16:09 kados paul: ideally yes 16:09 paul RSS. 16:08 paul the "queried DB" must support what ? opensearch standard ? 16:08 slef kados: how can opensearch be an open standard when it builds on Harvard's Really Simple Syndication (aka RSS 2.0) and its copyright "is the property of A9.com"? 16:08 kados and returns results in RSS format 16:08 kados paul: opensearch is http-based GET 16:07 kados a journal database (CUFTS) 16:07 paul and what protocol does opensearch use to query databases ? 16:07 kados you've got three ILS catalogs 16:07 kados so with the above link 16:07 kados paul: exactly ... catalogs AND electronic databases AND journal dbs AND web AND local collections ... list goes on and on 16:06 paul ? 16:06 paul Koha + other catalogues 16:06 kados and items you can link to electronically 16:06 paul multiple catalogue querying ? 16:06 kados physical items you can check out somewhere 16:06 paul ok, I think I understand. But what will we use this for in Koha ? 16:06 kados but we've identified at least two kinds of items 16:06 kados we're still working out the details of how exactly to taxonomize the groups 16:05 slef stop taunting me. 
16:05 kados all the same 'kinds' of items appear in the same column 16:05 kados the patron sees a list of results from many institutions 16:05 kados you'll see what I mean 16:04 kados in the above link 16:04 kados so if you change the "Display style" to "Merged" 16:04 kados of results 16:04 kados to allow ranking 16:04 rach they aren't up yet 16:04 kados namespace even 16:04 kados so the idea is that we extend the boundaries of opensearch namespage 16:03 kados paul: sure 16:03 slef well, thank you from my poor eyesight :P 16:03 paul javascript problem it seems. 16:03 kados thd: it's just a proof-of-concept 16:03 paul kados, could you explain what we could use this for ? 16:03 thd kados: what is the problem for other browsers? 16:03 kados slef: yep ... need mozilla 16:03 slef using lynx 16:03 kados slef: using mozilla? 16:02 slef kados: that page does nothing here 16:02 kados the idea of ILL 16:02 kados the Evergreen folks (particularly Mike Rylander) and I have been mulling over 16:02 kados (note that it only works well in mozilla) 16:02 slef (not a worry for me or paul yet, though) 16:02 kados slef: it's an open standard 16:02 kados http://liblime.com/opensearchportal.html 16:02 slef so, this is likely to be patent-encumbered? 16:01 kados slef: yep 16:01 slef kados: a9 is amazon? 16:01 slef kados: CQL looks to me about as far from that as you can get without using XML or a programming language syntax 16:01 kados any opinions? 16:01 kados ok ... so how about opensearch (5 more mins?) 16:00 kados slef: which is why I like CQL ;-) 16:00 kados slef: I agree completely 16:00 slef I feel we should be moving towards more search-enginey type freeform query languages if we can. Unfortunately, I can't express that well yet. 16:00 kados ok ... 
well let's put it aside for now 15:59 slef anyway, my general feeling is that this is too complicated to expose anywhere outside the backend and even then it looks like it should be kept away from internal interfaces 15:59 chris i see it as kinda secondary 15:59 chris i dont really have an opinion at this point 15:58 kados chris: any words about CQL? 15:58 kados (which I hope we are with opensearch) 15:58 kados just like it would be if we were doing an xml namespace 15:57 kados slef: no ... that's handled server side of course 15:57 slef kados: so users would have to start queries with >bath="http://zing.z3950.org/cql/bath/2.0/" if they wanted to search on the holding institution?! 15:56 thd kados: in my experience textual databases are much easier to manage for many tasks 15:55 kados thd: we will still use MARC ... it's just the way that we store it that's different (in mysql or in textual form) 15:55 paul internal marc storage of biblio 15:55 kados slef: that's the 'bath context set' 15:55 paul thd : zebra will replace marc biblios. 15:55 kados slef: http://zing.z3950.org/srw/bath/2.0/#2 15:54 thd kados: but zebra would not replace marc either would it? 15:54 slef kados: can you give me an example? 15:54 kados thd: that refers to Zebra I think 15:54 thd kados: I understood well but what does this quote from the agenda " should it replace marc tables in Koha?" mean? 15:54 kados or we could invent our own 15:54 kados and there are a number of open context sets out there as well 15:53 kados well we could use the default 15:53 slef What namespaces might we want to use? 15:53 paul no 15:53 kados slef: I'm all ears 15:52 kados paul: have an opinion? 15:52 slef With a while longer, I may be able to express my point better, but that's not really been possible for a few weeks. 
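kados's answer to slef, that the context-set prefix is "handled server side", can be sketched as a tiny query builder: the user supplies only an index name and a term, and the server prepends the prefix assignment before handing the query to the target. An illustrative sketch only (not Koha code; the choice of `subject` as a bath index is an assumption):

```python
# CQL context-set prefix assignment, built server-side so end users
# never type the >bath="..." part themselves.
BATH = "http://zing.z3950.org/cql/bath/2.0/"

def to_cql(index, term):
    """Build a fully-qualified CQL query from a bare index and term."""
    term = '"%s"' % term if " " in term else term  # quote multi-word terms
    return '>bath="%s" bath.%s=%s' % (BATH, index, term)

print(to_cql("subject", "domestic cats"))
```

This keeps CQL's namespacing power (different context sets per target) while the user-facing form stays as simple as "it only requires the term".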
15:52 kados so our searching is extensible in other words 15:52 kados (they call them 'context sets') 15:51 slef Unfortunately, I don't know what that : style is called formally and searching for query languages brings back lots of XML-related stuff. SPARQL is all well and good, but not suitable for this. 15:51 kados slef: another strength of CQL is namespaces like in XML 15:51 kados thd: so one can't replace the other ;-) 15:51 kados thd: CQL is just a query method ... MARC is a storage method 15:51 slef The rest of searching seems to have been going towards things like author:bloggs country:uk for field searches, default to and, and simple leading - for nots. 15:50 kados (RPN being default for Z39.50) 15:50 kados and Zebra has mappings for CQL to RPN 15:50 thd CQL is great but I had been concerned about the agenda suggesting it might replace MARC rather than complement it 15:50 kados slef: right 15:50 slef So, users would be inputting that for complex queries? 15:50 kados the complex query syntax is there if you need it 15:49 slef Also, the more I look, the more I suspect, as the LoC CQL site doesn't seem to have useful references 15:49 kados it only requires the term 15:49 kados that's the beauty of CQL 15:49 kados nope 15:49 slef Would we be expecting end-users to construct that syntax? 15:49 kados the more I read up and learn about CQL the better I like it 15:49 kados slef: I know you had a concern 15:48 kados any reactions? 15:48 thd kados: yes 15:48 paul yep. 15:48 kados ok ... so can we move on to CQL? 15:47 paul but that's not so easy... 15:47 paul in marc <=> non marc mapping 15:47 paul "subject = 650$a -- 650$x -- 650 $y" 15:47 paul the best, I think, would be to be able to say 15:47 kados thd: not automatically 15:47 thd kados: will implementing zebra bring it back? 15:47 paul not so sure. 15:47 kados should be easy to bring back 15:46 kados I think we lost that in 2.2 15:46 kados yes ... 
Koha used to have a nice subject index when a subject search was returned 15:46 paul in Koha 2.2.x, they are poorly managed in normal view. 15:46 thd kados: if science -- methodology koha only sees science in the see also 15:46 kados ahh ... I see ... 15:46 paul $x Nelsonville 15:46 paul $x Ohio 15:45 paul $a USA 15:45 paul and 15:45 paul $x Marseille 15:45 paul $x France 15:45 paul $a Europe 15:45 paul for example : 15:45 paul of a subject. 15:45 paul for example, in UNIMARC, $x / $y / $z are subdivisions 15:45 kados ahh 15:45 paul he's talking about subjects split into more than 1 subfield. 15:45 kados thd: that seems to work ok if you've got it setup 15:44 thd kados: yes 15:44 paul no 15:44 kados thd: ahh ... you're talking about 'see also' feature in koha? 15:43 thd kados: marc 650a -- 650x -- 650y -- 650z as a compound subject 15:42 slef thd: can you give a reference? 15:42 kados thd: I'm not sure what that means 15:42 chris cool 15:41 kados chris: there's a good 'embedding zebra' document that may be a good place to start 15:41 thd synthetic 'science -- methodology' not represented well in koha 15:41 kados paul: ok ... sounds good 15:41 kados thd: not sure I understand 'synthetic subjects' 15:41 paul hdl should be able to work on this in 2 weeks 15:40 paul but my main problem, for now, is to understand how to deal with UNIMARC... 15:40 kados paul: great! 15:40 paul about delay : during summer I hope. 15:40 thd will merely implementing zebra allow searches and links to work properly for synthetic subjects? 15:40 kados opensearch 15:40 chris sweet 15:40 kados CQL 15:40 kados ok ... so two other points are: 15:39 kados (the indexdata folks will be at ALA in a nearby exhibit and I plan to pick their brains when things are slow) 15:38 kados I can look at zebra parameters/customization 15:38 kados any idea of a timeframe? 15:38 chris i can help with rejigging the opac 15:38 paul but not in a short delay. 15:38 kados thanks paul 15:38 kados great! 
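The compound-subject behaviour thd and paul describe (650/606 subfields $a/$x/$y/$z forming one chained heading) amounts to linking each subdivision to the full chain up to that point, so that "Science -- Methodology" searches for itself rather than for bare "Science". An illustrative sketch, not Koha code, with a made-up subfield representation:

```python
# Build the chained headings for a subject field's ordered subfields,
# e.g. 650 $a Science $x Methodology. Each returned string is the
# search term its link in the detail view should carry.
def chained_headings(subfields):
    """subfields: ordered (code, value) pairs from one subject field."""
    chain, headings = [], []
    for code, value in subfields:
        if code in ("a", "x", "y", "z"):  # main heading + subdivisions
            chain.append(value)
            headings.append(" -- ".join(chain))
    return headings

print(chained_headings([("a", "Science"), ("x", "Methodology")]))
```

With this shape, the "see also" link for the subdivision points at the synthetic subject itself, which is the restoration thd is asking about.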
15:38 chris excellent 15:38 paul i volunteer to take care of Biblio.pm package rewriting. 15:37 kados ok ... so who can make our ideas happen? 15:37 kados zebra will help with consortia 15:36 chris yep 15:36 kados thd: good point ... this happens already 15:36 kados consortium is a group of libraries collaborating (consortia => library => branch) 15:36 thd the book will still show on the shelf if it is uncharged but in a patron's hands 15:36 chris multiple libraries with a unified bibliographical catalog 15:35 paul consortia ? 15:35 chris i think koha + zebra will help with consortia too 15:35 chris :) 15:35 kados heh 15:35 paul and in the mean time, the book has been issued ;-) 15:35 kados :-) 15:35 kados yep 15:34 kados yea ... I like that idea 15:34 paul when someone checks from Athens university & sees "book available", he needs at least 10mn to arrive to Athens PL. 15:34 chris yep 15:34 kados slef: yep 15:34 kados (OPAC checks koha tables for searches?) 15:34 slef Are we on 1. Zebra? 15:34 kados what's the difference? 15:33 chris yeah that sounds good 15:33 chris ahh i getcha 15:33 paul checking item from opensearch or something like that means having an imperfect result maybe 15:33 chris right 15:33 kados yea ... those patron flame letters are coming in by the hundreds ;-) 15:33 paul we could have 2 behaviours : checking item from Koha opac means checking koha circ DB just after retrieving the record. 15:32 kados heh 15:32 chris they get angry and write paul a letter :-) 15:32 chris they ask a librarian, and the librarian says oh we issued it 15:32 paul not necessary. 15:32 kados right ... 
my fear as well 15:32 chris they cant find it 15:32 chris it says its on the shelf 15:32 chris then 2 mins later someone searches for it, on the opac 15:32 chris that i issue a book 15:31 chris ahh so theres my fears 15:31 paul because zebra index update will be run in crontab, maybe 10mn after the circ 15:31 kados the status check is very quick in SQL (it's a 'factual' dataset) 15:31 paul i think circ will be as fast as it is for now. 15:31 paul about circ speed : 15:31 kados before returning results 15:31 kados I'd like to see our first implementation of Zebra 'double-check' the item statuses in Koha 15:30 chris ie, if we dont make an issue finished when the index is updated, then the index will be wrong for a period of time 15:30 chris paul: if we dont slow circ .. then is there a chance our search will return the wrong results? 15:30 kados :-) 15:30 kados you mean circ can get slower? 15:29 thd paul: I am too new to know well and too anonymous at the moment. 15:29 paul circ won't be slower chris. 15:29 kados yea .. I'm leaning towards the second one 15:29 chris but the choice will be, zebra for all (need to test .. does this make circ slower) or zebra for biblio, database for item info 15:28 chris ok, so i think a plugin will be out 15:28 thd kados: he he 15:28 kados thd: there's not much to lose ;-) 15:28 kados thd: right 15:28 thd kados: with nothing lost? 15:27 kados thd: the idea is that we would actually gain functionality with Zebra 15:27 thd what would be lost from the current search api in substituting zebra? 15:26 paul my opinion will be definitely formed once we see better how complex it is to parameterize zebra... 15:26 chris i dont think we will know until we try 15:26 kados I'll third that 15:26 chris me either 15:26 paul but really not sure it will be worth the effort. 
15:26 chris right 15:26 paul (internal search being with marc_word as in 2.2) 15:26 paul one with Koha internal search, one with Zebra 15:25 paul anyway, my idea would not change anything on Biblio.pm API, so we could have 2 different Koha DBs 15:25 chris yep me either 15:25 paul but i'm not sure it's the best one. 15:25 paul that was my idea 1st 15:25 kados I'd prefer to simplify things and just use one api (we're short on maintainers) 15:24 chris yep 15:24 kados because it'll mean two searching methods to maintain 15:24 kados is what to do with the search api 15:24 kados one prob with that that I can see 15:24 chris and a cron job that updates the zebra index 15:23 chris then have some routines that search for bibliographical data using zebra, and fetch the item data from the issues and items tables 15:23 paul explain your ideas ? 15:23 chris have a systempreference, use zebra searching 15:22 kados how would that work? 15:22 chris or implement zebra as a plugin 15:22 chris pauls idea (which i like) but perhaps has some issues (circulation, tying koha to zebra) 15:21 paul dunno in MARC21 15:21 paul in UNIMARC, we have the "recommandation 995" that deals with that information. 15:21 paul the biggest deal i think is to store item information. 15:21 chris the way i see it, there are 2 ways we can do this 15:21 kados welcome 15:21 chris heh 15:20 paul Biblio.pm has been made by a good coder ;-) 15:20 paul probably not so big. 15:20 paul requires some more coding to change item status in item marc record. 15:20 chris but will mean a big rewrite of the C4::Biblio eh? 15:20 paul biblio.pm being responsible to request zebra-update when one or the other is modified. 15:20 chris that sounds good paul 15:20 chris hmmm 15:20 paul and store raw iso2709 data in biblio & item tables. 15:19 kados marc_word for sure! ;-) 15:19 paul in my idea we can get rid of marc_*_table and marc_word. 
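The hybrid chris describes (Zebra answers the bibliographic query, a cron job refreshes the index, and item status comes live from the issues/items tables) can be sketched as a thin wrapper: search Zebra first, then overlay the 'factual' circulation status from SQL just before returning results, so a book issued a minute ago never shows as on the shelf. Both backends are stubbed below; this is an illustrative sketch of the control flow, not Koha code:

```python
# Stub for a Zebra client: returns bibliographic hits only, possibly
# minutes stale because the index is rebuilt from cron.
def zebra_search(query):
    return [{"biblionumber": 1, "title": "The Cat in the Hat"}]

# Stub for a lookup against the circulation tables: always current.
def item_status_from_sql(biblionumber):
    return "issued" if biblionumber == 1 else "available"

def search(query):
    """Bibliographic data from Zebra, item status double-checked in SQL."""
    hits = zebra_search(query)
    for hit in hits:
        # Overlay the live status so stale index data never reaches the OPAC.
        hit["status"] = item_status_from_sql(hit["biblionumber"])
    return hits

print(search("cats"))
```

The design choice being debated is exactly this split: the status overlay costs one quick SQL lookup per hit, but it removes the window where the Zebra index is wrong between circulation events and the next reindex.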
15:19 chris yeah, i like the idea, im just scared it will slow circulation 15:19 kados so "is zebra updating fast enough not to slow down circ" 15:19 kados yep agreed 15:19 paul good point to chris, should do some tests. 15:19 chris as the status of items (on loan etc) will be changing all the time 15:19 kados chris: actually, zebra supports updating 15:18 chris the only problem i can see with that ... is that you would need to be reindexing in zebra constantly 15:18 paul and an item MARC record 15:18 paul - have a biblio MARC record 15:18 paul so we could : 15:18 paul but in Koha itself, i think we should still have both kinds of information. 15:17 kados that would be ideal 15:17 paul so, when zebra finds a record, it can return it without more code. 15:17 paul I think we should, at least in zebra. 15:17 paul so, the question is do we store both in a single MARC record or not ? 15:17 paul we have 2 different kinds of information : biblio & item level information. 15:16 kados yes please do 15:16 paul do you want me to explain my ideas ? 15:16 paul i wanted to write a sheet, but could not find time. 15:16 chris yep 15:16 paul i have some ideas for this. 15:16 kados IMO we need to look at what kinds of data we should expect Zebra to return and what we want from the RDBMS 15:15 chris i think what we need to do now is maybe build a prototype of how it will work with koha 15:15 kados right 15:14 chris from our initial investigations, zebra looks to be fantastic for indexing and searching bibliographical data 15:13 paul slef : ok. 15:13 slef paul: index this MARC21 library and that BLMARC one and that UNIMARC one and search across all. 15:13 chris theres nothing worse than the opac telling you a book is on the shelf when it isnt :) 15:13 chris not really, im just conscious that we will have to write a wrapper for it, to check the stuff zebra wont check (item status) and we will need to make sure that's accurate 15:12 kados chris: are you concerned that Zebra will not be accurate? 
15:11 kados 2 speed 15:11 kados 1 accuracy 15:11 kados our priorities should be like Google's: 15:11 kados chris: I agree 15:10 paul what do you mean ? 15:10 paul multiple MARC types in 1 index ? 15:10 chris probably obvious, is that we have 2 audiences for the search, the opac and the librarians .. and that we dont want to sacrifice accuracy for speed 15:10 kados slef: yep 15:10 slef can it index multiple MARC types in one index? 15:09 chris a couple of points 15:09 kados chris wanna expand on that? 15:08 kados are maybe a good place to start 15:08 kados so chris's comments listed on the agenda 15:08 chris yep 15:07 paul ok for me. 15:07 kados so let's start with it 15:07 kados I think Zebra is the biggie 15:07 kados CQL 15:07 kados OpenSearch 15:07 kados Zebra 15:07 kados basically three things to cover: 15:06 kados Ok ... well let's get started then 15:06 kados FrancoisL wrote me that he couldn't make it today 15:05 kados paul, chris, slef (at dinner), owen and me 15:05 paul francoisl expected to be here, but seems we only have his computer... 15:05 kados ok ... so is anyone missing? 15:04 chris i am 15:02 kados who's present? 15:02 kados (please add to it asap if you've got something to cover that's not listed) 15:02 kados http://tinyurl.com/bw86r 15:02 kados Our agenda is here: 15:02 kados OK everyone ... welcome to our first Searching Group meeting 15:00 paul even if the biblio is in unimarc in fact. 15:00 paul and display=usmarc works fine 15:00 paul bureau.paulpoulain.com:2100/Default 14:59 kados maybe I've made a mistake 14:59 kados but paul what's the IP address, port and db name ? 14:59 kados it's http://liblime.com/zap/try.html 14:59 kados heh 14:58 paul (on zap/advanced.html, paul poulain server is up, but for a reason not clear at the moment, results are not shown. 
results are in the other zap page -the one that joshua will remind us now ;-) ) 14:56 kados hi chris 14:56 kados Try out Zebra: http://liblime.com/zap/advanced.html 14:56 paul hello chris 14:56 chris morning 14:56 kados t-minus 4 minutes 14:54 owen paul: thanks for the reminder about the template tags and the translator. I know I have a lot of old instances of that in the NPL templates. I'll try to weed them out. 14:47 kados http://www.saas.nsw.edu.au/koha_wiki/index.php?page=AgendAndNotes05jun21 14:47 kados add your stuff to the agenda: 14:46 kados T-minus 14 minutes till Searching Group Meeting 14:46 owen Hi paul 14:46 paul hello owen. 14:44 thd Why do all Open WorldCat searches have 'england' in the query string? 14:43 kados hi paul 14:40 paul hello kados & slef 14:35 kados slef: the meeting _was_ at 19:00 ... but paul posted that he couldn't make it so we rescheduled to 20:00 (BTW: what's the diff between UTC and GMT?) 14:28 slef Thought meeting was 19:00 UTC. Read about CQL. Seems complicated compared to trends, but not looked for competitors (time! :-/ ) 14:20 kados Read up on CQL: http://www.loc.gov/z3950/agency/zing/cql/ 14:20 kados Read up on Zebra: http://indexdata.dk/zebra 14:20 kados T-minus 40 minutes to Searching Group Meeting 14:03 kados slef: http://tinyurl.com/c2ter for the time in your area 14:03 kados slef: still about an hour to go 13:14 slef has this meeting slipped by an hour?