Time | S | Nick | Message
12:58 |
|
thd |
kados: are you there? |
12:58 |
|
thd |
kados: I posted some more questions to the wiki page |
12:59 |
|
kados |
thd: I'll take a look |
13:19 |
|
tumer[A] |
kados:?? |
13:19 |
|
kados |
tumer[A]: I'm here |
13:20 |
|
tumer[A] |
do not wait for me to finish installing this debian today |
13:20 |
|
tumer[A] |
it will take some time |
13:20 |
|
tumer[A] |
staff is tired |
13:20 |
|
kados |
ok |
13:20 |
|
kados |
did you get the OS installed at least? |
13:20 |
|
tumer[A] |
i want you to see this |
13:21 |
|
kados |
if so, you can send me ssh login and passwd and I can ssh in and finish the install |
13:21 |
|
tumer[A] |
well not completely |
13:21 |
|
tumer[A] |
tomorrow i will send you whatever you want |
13:21 |
|
kados |
ok |
13:22 |
|
tumer[A] |
with yaz client go to library.neu.edu.tr:9999 |
13:22 |
|
tumer[A] |
find john |
13:22 |
|
tumer[A] |
format xml |
13:22 |
|
tumer[A] |
show 1 |
13:22 |
|
tumer[A] |
that is a complex record |
13:23 |
|
tumer[A] |
then do elem biblios |
13:23 |
|
tumer[A] |
show 1 |
13:23 |
|
tumer[A] |
that is a bibliographic record |
13:23 |
|
tumer[A] |
then do elem holdings |
13:23 |
|
tumer[A] |
show 1 |
13:23 |
|
tumer[A] |
that is a holdings record |
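(A consolidated sketch of the session tumer walks through above, assuming yaz-client and the server's default database:

    yaz-client library.neu.edu.tr:9999
    find john
    format xml
    show 1
    elem biblios
    show 1
    elem holdings
    show 1

The first show returns the full complex record, the second the bibliographic part only, the third the holdings part only.)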
13:23 |
|
kados |
tumer[A]: library.neu.edu.tr:9999/biblios? |
13:24 |
|
tumer[A] |
no, it's the default |
13:24 |
|
kados |
[239] Record syntax not supported -- v2 addinfo '' |
13:24 |
|
kados |
ahh |
13:24 |
|
kados |
sorry |
13:25 |
|
kados |
why is <record> the root for the biblio record? |
13:25 |
|
kados |
but <holdings> is the root for holdings? |
13:25 |
|
kados |
shouldn't you have <biblio><record>? |
13:25 |
|
tumer[A] |
holdings is multiple holdings |
13:26 |
|
kados |
very cool though |
13:26 |
|
tumer[A] |
this way each <record> is fully MARC compliant |
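(The nesting being described, sketched with element names implied by the conversation; tumer spells out the real structure later in the log:

    <koharecord>
      <record/>            <!-- the MARC21 bibliographic record -->
      <holdings>
        <record/>          <!-- one MARC21 holdings record per copy -->
        <record/>
      </holdings>
    </koharecord>

Each inner <record> stays a plain MARC21 record, which is what keeps the format MARC compliant.)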
13:26 |
|
kados |
right, I see |
13:26 |
|
kados |
very nice |
13:26 |
|
kados |
so when you edit a record |
13:26 |
|
kados |
do you have to save the whole <koharecord> every time? |
13:27 |
|
kados |
or can you modify <holdings><record> individually? |
13:27 |
|
tumer[A] |
all individually editable |
13:27 |
|
kados |
sweet |
13:27 |
|
kados |
tumer rocks! |
13:27 |
|
kados |
as usual :-) |
13:28 |
|
kados |
so where is the prob? |
13:28 |
|
tumer[A] |
but ZEBRA is taking so much effort |
13:28 |
|
tumer[A] |
no prob |
13:28 |
|
kados |
ahh |
13:28 |
|
tumer[A] |
just zebra crashes |
13:28 |
|
kados |
just very slow? |
13:28 |
|
kados |
ahh |
13:28 |
|
kados |
well ... |
13:28 |
|
kados |
i think we could: |
13:28 |
|
kados |
get zebra running as yours is on linux |
13:28 |
|
kados |
write a script to simulate cataloging processes |
13:28 |
|
kados |
write another script to simulate searches |
13:29 |
|
kados |
send that stuff to ID |
13:29 |
|
kados |
and they _must_ fix it |
13:29 |
|
kados |
in 15 days no less :-) |
13:29 |
|
tumer[A] |
this way of indexing will be a little bit slower, so says ID |
13:29 |
|
kados |
right |
13:29 |
|
kados |
because it's xpath? |
13:29 |
|
kados |
have you been exchanging email with ID? |
13:29 |
|
kados |
because I haven't gotten any of it |
13:29 |
|
tumer[A] |
but i can index multiple xmls as one bunch from zebraidx as well |
13:30 |
|
tumer[A] |
i have designed xslt sheets and xsd schemas which i am going to send soon |
13:30 |
|
kados |
you can index a bunch of xml?!?! |
13:31 |
|
kados |
isn't that a feature not implemented by ID yet? |
13:31 |
|
kados |
in zoom? |
13:31 |
|
tumer[A] |
no not in zoom from zebraidx |
13:31 |
|
kados |
ahh |
13:31 |
|
kados |
still ... I thought it only worked for iso2709 |
13:31 |
|
kados |
with zebraidx |
13:31 |
|
tumer[A] |
similar to our iso2709 |
13:32 |
|
kados |
did ID implement this for you? |
13:32 |
|
tumer[A] |
its almost the same speed as iso |
13:32 |
|
kados |
ID did this? or did you? |
13:32 |
|
tumer[A] |
no its their new Alvis filter |
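(As a sketch, with paths and config file name assumed: zebraidx can index a whole directory of record files in one pass and then commit the result,

    zebraidx -c /path/to/zebra.cfg update /path/to/records
    zebraidx -c /path/to/zebra.cfg commit

and with the kohacollection wrapper described below, one such file can carry many koharecords.)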
13:32 |
|
kados |
nice! |
13:32 |
|
kados |
so have you been emailing supportid? |
13:32 |
|
kados |
cuz I haven't gotten any ccs :( |
13:32 |
|
tumer[A] |
no just reading the lists |
13:32 |
|
kados |
ahh |
13:33 |
|
kados |
very cool ! |
13:33 |
|
tumer[A] |
i did not get any support from them |
13:33 |
|
kados |
k |
13:33 |
|
kados |
tumer[A]: i also have a bit of news |
13:33 |
|
tumer[A] |
apart from mike saying i cannot merge lists |
13:33 |
|
kados |
http://wiki.koha.org/doku.php?[…]raprogrammerguide |
13:34 |
|
kados |
check the Field weighting section |
13:34 |
|
kados |
and the Multiple Databases section too |
13:34 |
|
kados |
field weighting is really powerful! |
13:34 |
|
tumer[A] |
kados: i played with those but couldn't get much use out of them |
13:34 |
|
tumer[A] |
especially the multiple database did not help |
13:34 |
|
kados |
it's useful because you can tell zebra: |
13:35 |
|
kados |
do a search on exact title and title for 'it' |
13:35 |
|
tumer[A] |
i did not use weighting |
13:35 |
|
kados |
weight by exact title |
13:35 |
|
kados |
so the ones with exact title 'it' come first |
13:35 |
|
kados |
I'm going to write a new CCL parser |
13:35 |
|
tumer[A] |
very cool |
13:35 |
|
kados |
that transforms every CCL query into a PQF query that deals with weight |
13:36 |
|
kados |
so the librarians can specify where to weight the query |
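(A sketch of the kind of PQF such a parser might emit for the 'it' example, using bib-1 attributes 1=4 use title, 2=102 relevance ranking, 6=3 complete field; the exact attribute mix is an assumption:

    @attrset bib-1 @or @attr 1=4 @attr 6=3 @attr 2=102 it @attr 1=4 @attr 2=102 it

Both branches are relevance ranked, and a record whose complete title field is 'it' matches both branches, so it floats to the top.)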
13:36 |
|
tumer[A] |
i thought this section was "do not use in production yet!" |
13:36 |
|
kados |
no, it's in 1.3, so it's stable |
13:49 |
|
tumer[A] |
kados: one reason for me to split the record like this is that i am going to prevent the union catalog from reaching the holdings section |
13:50 |
|
kados |
tumer[A]: right, makes really good sense |
13:50 |
|
tumer[A] |
that part contains lots of nonpublic notes |
13:50 |
|
kados |
tumer[A]: so they are saved separately? |
13:50 |
|
kados |
tumer[A]: there are three indexes? |
13:50 |
|
kados |
tumer[A]: I'm not up on the Alvis filter |
13:50 |
|
tumer[A] |
no, one index, one kohacollection record |
13:51 |
|
tumer[A] |
different xslt sheets |
13:51 |
|
tumer[A] |
the default sheet will only show the biblios |
13:51 |
|
tumer[A] |
without saying elem biblios |
13:52 |
|
kados |
ahh ... |
13:52 |
|
dewey |
ahh ... is that how 'snapshots' are done? |
13:52 |
|
kados |
wow, that's really nice |
13:52 |
|
tumer[A] |
other sheets will be out of bounds except from within koha |
13:52 |
|
kados |
we could do _anything_! |
13:52 |
|
tumer[A] |
yep |
13:52 |
|
kados |
we could have a MODS stylesheet |
13:52 |
|
kados |
or dublin core! |
13:52 |
|
kados |
holy shit! |
13:52 |
|
tumer[A] |
already have it |
13:52 |
|
kados |
holy shit! |
13:52 |
|
tumer[A] |
DC, MODS, MADS |
13:53 |
|
kados |
hehe |
13:53 |
|
kados |
ok ... |
13:53 |
|
tumer[A] |
i am dropping because of fatigue |
13:53 |
|
kados |
tumer[A]: get some sleep man :-) |
13:54 |
|
kados |
tumer gets major props |
13:54 |
|
tumer[A] |
its taking lots of time to design the indexing sheets |
13:54 |
|
kados |
owen: so ... currently, we have: |
13:54 |
|
tumer[A] |
i do not know a word of xslt |
13:54 |
|
kados |
IP address, port, and database |
13:54 |
|
kados |
so you can connect to a database and run queries |
13:55 |
|
kados |
owen: tumer now has a filter added to the mix |
13:55 |
|
kados |
owen: so instead of just pulling out raw marc |
13:55 |
|
kados |
owen: we can pull out any xmlish data we want |
13:55 |
|
kados |
owen: just by specifying an xslt filter |
13:55 |
|
owen |
when connecting to what database? |
13:55 |
|
kados |
any of them |
13:55 |
|
kados |
so instead of: |
13:55 |
|
kados |
pulling out a MARC record |
13:56 |
|
kados |
creating a MARC::Record object for it |
13:56 |
|
kados |
passing it in |
13:56 |
|
kados |
looping through, getting out the good data for display |
13:56 |
|
kados |
passing to the template as a loop |
13:56 |
|
kados |
writing html in the template |
13:56 |
|
kados |
we can: |
13:56 |
|
kados |
* query zebra for the data using a stylesheet |
13:56 |
|
kados |
* display it directly in HTML on the page |
13:57 |
|
kados |
all we need to do is have an xslt stylesheet defined for turning |
13:57 |
|
kados |
MARC into HTML |
13:57 |
|
kados |
this is groundbreaking stuff |
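(A minimal sketch of such a stylesheet, assuming MARC21 slim XML input; only the title field is shown and the HTML is illustrative:

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:marc="http://www.loc.gov/MARC21/slim">
      <xsl:template match="marc:record">
        <div class="biblio">
          <!-- 245$a is the MARC21 title proper -->
          <h2><xsl:value-of
              select="marc:datafield[@tag='245']/marc:subfield[@code='a']"/></h2>
        </div>
      </xsl:template>
    </xsl:stylesheet>

Every label and layout decision lives in the sheet, which is what would make the display customizable per installation.)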
13:57 |
|
owen |
Would you still need to pass that final-stage HTML to a template somehow? |
13:57 |
|
kados |
yea |
13:57 |
|
kados |
but not as a loop |
13:57 |
|
kados |
just as a variable |
13:57 |
|
owen |
Just as a chunk |
13:57 |
|
kados |
yep |
13:58 |
|
owen |
Swank. |
13:58 |
|
kados |
so the labels would be 100% customizable |
13:58 |
|
kados |
especially if the xslt was in turn a syspref :-) |
13:58 |
|
owen |
I mean crap. Now I gotta learn XSLT. |
13:58 |
|
kados |
yea, you gotta :-) |
13:59 |
|
tumer[A] |
and owen please do, i am trying xslt on a trial and error basis |
13:59 |
|
kados |
so we can have all kinds of filters |
13:59 |
|
kados |
one for OPAC display (maybe with certain fields hidden) |
13:59 |
|
kados |
one for Intranet Display |
13:59 |
|
kados |
one for the MARC editor |
14:00 |
|
kados |
hehe |
14:00 |
|
kados |
one for RSS |
14:00 |
|
kados |
one for DC, one for MODS |
14:03 |
|
thd |
kados: so this is what we had wanted |
14:03 |
|
kados |
thd: yep :-) |
14:04 |
|
thd |
kados: the only drawback is the size of XML and its impact on performance when exchanging XML files across the network |
14:05 |
|
thd |
kados: I think that the major performance issue for the record editor is all the XML fields taking up so many bytes when transferred |
14:07 |
|
thd |
kados: can we compress the XML before sending it without having to redesign basic protocols like Pines did? |
14:08 |
|
kados |
thd: yes |
14:08 |
|
kados |
thd: JSON |
14:08 |
|
kados |
thd: piece of cake |
14:10 |
|
thd |
kados: what does JSON have to do with compression? |
14:10 |
|
kados |
thd: json is essentially compressed XML |
14:11 |
|
kados |
thd: it's what Evergreen uses |
14:11 |
|
tumer[A] |
kados:here is the record schema http://library.neu.edu.tr/kohanamespace/ |
14:14 |
|
kados |
k |
14:14 |
|
tumer[A] |
before i continue with designing the rest we need to agree on the record design |
14:15 |
|
kados |
ok ... |
14:15 |
|
kados |
I have one question before we continue |
14:15 |
|
tumer[A] |
its just an extension of MARC21XML as described on loc.gov |
14:15 |
|
kados |
right |
14:15 |
|
kados |
a superset of it, right? |
14:15 |
|
tumer[A] |
right |
14:15 |
|
kados |
so my question is |
14:16 |
|
kados |
can we at the same time, do 'duplicate detection'? |
14:16 |
|
kados |
tumer[A]: do you understand what I mean? |
14:16 |
|
tumer[A] |
duplicate of what? |
14:16 |
|
kados |
in other words ... what about having: |
14:16 |
|
kados |
<koharecord> |
14:16 |
|
kados |
<bibliorecord> |
14:16 |
|
kados |
<record> |
14:16 |
|
kados |
</record> |
14:16 |
|
kados |
<holdingsrecord> |
14:17 |
|
kados |
</holdingsrecord> |
14:17 |
|
kados |
</bibliorecord> |
14:17 |
|
kados |
<bibliorecord> |
14:17 |
|
kados |
etc. |
14:17 |
|
kados |
so we not only group holdings within a biblio ... we also group biblios within a koharecord |
14:17 |
|
kados |
that way, when I search on 'tom sawyer' |
14:18 |
|
kados |
the 'koharecord' will pull up that title, with multiple MARC records beneath it |
14:18 |
|
kados |
does that make sense? |
14:18 |
|
tumer[A] |
it does but very complicated |
14:18 |
|
kados |
yes I agree |
14:18 |
|
kados |
we may be able to use the FRBR algorithm |
14:18 |
|
kados |
if it's too complicated, we can consider it for 4.0 |
14:19 |
|
tumer[A] |
FRBR ? |
14:19 |
|
kados |
http://www.oclc.org/research/projects/frbr/ |
14:19 |
|
kados |
Functional Requirements for Bibliographic Records |
14:19 |
|
tumer[A] |
ahh yes i saw that |
14:19 |
|
kados |
tumer[A]: I'm just throwing this idea out there |
14:20 |
|
kados |
tumer[A]: just brainstorming ... so feel free to call me crazy :-) |
14:20 |
|
tumer[A] |
well its beyond me for the moment |
14:20 |
|
kados |
k |
14:20 |
|
kados |
no prob |
14:20 |
|
tumer[A] |
currently we have: |
14:20 |
|
tumer[A] |
<kohacollection> |
14:20 |
|
tumer[A] |
<koharecord> |
14:21 |
|
tumer[A] |
<recordMARC21> |
14:21 |
|
tumer[A] |
</recordMARC21> |
14:21 |
|
tumer[A] |
<holdings> |
14:21 |
|
tumer[A] |
<recordMARC21holdings> |
14:22 |
|
tumer[A] |
</recordMARC21holdings> |
14:22 |
|
tumer[A] |
<recordMARC21holdings> |
14:22 |
|
tumer[A] |
</recordMARC21holdings> |
14:22 |
|
tumer[A] |
</koharecord> |
14:22 |
|
tumer[A] |
</kohacollection> |
14:22 |
|
kados |
right |
14:22 |
|
tumer[A] |
kohacollection can take many koharecords |
14:23 |
|
tumer[A] |
and index them all at once with zebraidx |
14:23 |
|
kados |
nice |
14:24 |
|
tumer[A] |
but to join all Tom Sawyers together is perl's job |
14:24 |
|
kados |
yes |
14:24 |
|
kados |
but my idea was not to join all tom sawyers together in a single recordMARC21 |
14:24 |
|
kados |
ie, the records _have_ to be separate |
14:25 |
|
tumer[A] |
yes but every time we add a new tom sawyer we have to find the previous and join them |
14:26 |
|
tumer[A] |
anyway, you brew on that, i go to sleep |
14:28 |
|
tumer[A] |
night all |
14:30 |
|
thd |
kados: I wonder if putting everything that might be put into a single XML record makes the XSLT too inefficient |
14:31 |
|
thd |
kados: I was disconnected for the best discussion on #koha yet |
14:31 |
|
kados |
here's my idea: |
14:31 |
|
kados |
<kohacollection> |
14:31 |
|
kados |
<biblio id="1"> |
14:31 |
|
kados |
<biblioitem id="1"> |
14:31 |
|
kados |
<recordMARC21/> |
14:31 |
|
kados |
<item> |
14:31 |
|
kados |
<recordMARC21holdings id="1"/> |
14:31 |
|
kados |
<recordMARC21holdings id="2"/> |
14:32 |
|
kados |
</item> |
14:32 |
|
kados |
</biblioitem> |
14:32 |
|
kados |
<biblioitem id="2"> |
14:32 |
|
kados |
<recordMARC21/> |
14:32 |
|
kados |
<item> |
14:32 |
|
kados |
<recordMARC21holdings id="3"/> |
14:32 |
|
kados |
<recordMARC21holdings id="4"/> |
14:32 |
|
kados |
</item> |
14:32 |
|
kados |
</biblioitem> |
14:32 |
|
kados |
</biblio> |
14:32 |
|
kados |
</kohacollection> |
14:34 |
|
thd |
kados: i think you could add duplicates of authority records and solve the authority indexing problem in Zebra |
14:34 |
|
kados |
could be |
14:35 |
|
kados |
here's a better scheme: |
14:35 |
|
kados |
<kohacollection> |
14:35 |
|
kados |
<biblio id="1"> |
14:35 |
|
kados |
<biblioitem id="1"> |
14:35 |
|
kados |
<recordMARC21/> |
14:35 |
|
kados |
<recordMARC21holdings id="1"/> |
14:35 |
|
kados |
<recordMARC21holdings id="2"/> |
14:35 |
|
kados |
</biblioitem> |
14:35 |
|
kados |
<biblioitem id="2"> |
14:35 |
|
kados |
<recordMARC21/> |
14:35 |
|
kados |
<recordMARC21holdings id="3"/> |
14:35 |
|
kados |
<recordMARC21holdings id="4"/> |
14:35 |
|
kados |
</biblioitem> |
14:35 |
|
kados |
</biblio> |
14:35 |
|
kados |
</kohacollection> |
14:35 |
|
thd |
kados: can everyone afford the CPU to parse very large XML records under a heavy load? |
14:35 |
|
kados |
thd: parsing isn't too bad |
14:35 |
|
kados |
thd: it's transport that kills you |
14:36 |
|
thd |
kados: yes while I was disconnected you missed my posts about transport |
14:36 |
|
kados |
so we do simple client detection ... if they have js, we pass the xml directly to the browser as JSON and let the browser parse it |
14:36 |
|
kados |
otherwise, we parse it server side and just pass html |
14:37 |
|
kados |
to the browser |
14:37 |
|
thd |
kados: if we could digress back to the dull issue of transport for a moment |
14:37 |
|
thd |
as you already have |
15:47 |
|
thd |
kados: one moment while I check the log for my posts about transport while disconnected |
15:48 |
|
kados |
ok |
15:51 |
|
thd |
kados: so there is a method for transforming XML into JSON and then transforming it back to XML again losslessly? |
15:52 |
|
thd |
kados: maybe there is no difference in what is transmitted for the editor's use, because the data for building an HTML page in JavaScript is always the same size whether it starts as ISO2709 or as MARC-XML |
15:54 |
|
thd |
kados: what exactly is the advantage of passing data to the browser in JSON? |
15:54 |
|
thd |
kados: are you still here? |
15:55 |
|
kados |
thd: if the client has javascript, the advantage is that the xslt processing can be done client-side |
15:55 |
|
kados |
thd: and the transport of HTML + JSON is much less than HTML + MARCHTML |
15:56 |
|
thd |
kados: is the difference large? |
15:56 |
|
kados |
well ... |
15:56 |
|
kados |
yes |
15:56 |
|
thd |
well you did say much less |
15:57 |
|
kados |
probably on average HTML + JSON will be about 20% the size of HTML + MARCHTML |
15:57 |
|
thd |
kados: so does that raise the CPU requirements or RAM requirements of the client to process the XSLT efficiently |
15:58 |
|
kados |
thd: not by much |
15:58 |
|
kados |
thd: demo.gapines.org |
15:58 |
|
kados |
thd: does that work for you? |
16:00 |
|
thd |
kados: my suspicion is that the client might bog down processing 500% larger HTML + MARCHTML |
16:01 |
|
kados |
thd: does demo.gapines.org work well for you? |
16:01 |
|
kados |
thd: the whole interface is client-side using JSON |
16:02 |
|
thd |
kados: does that not require a download first? |
16:02 |
|
thd |
kados: do I not have to install some XUL? |
16:02 |
|
kados |
nope, not for the opac |
16:03 |
|
kados |
just have javascript turned on in your browser |
16:03 |
|
thd |
kados: OK so yes the OPAC works but it is rather slow for features that certainly have no need of being client side |
16:04 |
|
kados |
thd: well whether to do it client-side sometimes could certainly be a syspref |
16:04 |
|
thd |
kados: I expect it is much faster if it still works with JavaScript off |
16:04 |
|
kados |
thd: i am 100% committed to having a 'no javascript' option |
16:05 |
|
kados |
thd: maybe faster on your machine, but definitely not on mine |
16:06 |
|
thd |
kados: I wondered every time the correct tab disappeared after I used the back function in my browser on zoomopac.liblime.com |
16:08 |
|
thd |
kados: I am accustomed to finding the form I had used, with the old values, ready to change for a new query; although a session state could store the current query or link to a 'change your query' option |
16:09 |
|
thd |
kados: so there is no problem about recovering the original MARC-XML from JSON? |
16:09 |
|
kados |
no |
16:09 |
|
kados |
JSON is identical to XML in terms of storage capabilities |
16:09 |
|
thd |
kados: we will have a one to one element to element correspondence? |
16:10 |
|
kados |
no |
16:10 |
|
kados |
JSON is just a compressed version of XML |
16:10 |
|
kados |
it's identical in capabilities |
16:11 |
|
thd |
kados: lossless compression? you answered no to both questions just now. Did you mean to answer no the second time? |
16:11 |
|
kados |
lossless compression |
16:11 |
|
kados |
there is no problem about recovering the original MARC-XML from JSON |
16:11 |
|
kados |
we will not have a one to one element to element correspondence |
16:12 |
|
kados |
JSON is lossless compression of XML |
16:12 |
|
thd |
kados: how can both of those statements be true? |
16:12 |
|
kados |
? |
16:13 |
|
kados |
thd: do some reading on JSON, i don't have time to explain it all right now :-) |
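(What kados is gesturing at, sketched in the style of Evergreen's JSON serialization — the exact array layout here is an assumption. A MARCXML field such as

    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">Adventures of Tom Sawyer</subfield>
    </datafield>

can be carried as a positional structure like

    ["245", "1", "0", [["a", "Adventures of Tom Sawyer"]]]

The element and attribute names are implied by position rather than repeated, so there is no one-to-one element correspondence, yet the original XML can be rebuilt without loss. Strictly speaking this is a terser serialization rather than compression.)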
16:13 |
|
thd |
kados: do you have time to discuss something more exciting |
16:14 |
|
thd |
? |
16:14 |
|
thd |
kados: by which I mean FRBR etc. in XML? |
16:15 |
|
kados |
sure |
16:16 |
|
kados |
but I don't think that's so simple unfortunately |
16:16 |
|
thd |
kados: ok: having exploded once already today |
16:16 |
|
kados |
because there is no one-to-one correspondence between MARC and any of the functional levels in FRBR |
16:16 |
|
thd |
kados: not simple, therefore, fun |
16:16 |
|
kados |
which is why FRBR sucks |
16:17 |
|
thd |
kados: you mean which is why MARC sucks |
16:17 |
|
kados |
so to get FRBR working, you have to break MARC |
16:17 |
|
kados |
yea ... that's what I mean :-) |
16:17 |
|
kados |
but a FRBR library system couldn't use MARC other than on import |
16:17 |
|
kados |
you can't go from MARC to FRBR then back to MARC |
16:17 |
|
kados |
it's a one way trip |
16:18 |
|
thd |
kados: you just need a good enough meta-model and a large amount of |
16:19 |
|
thd |
batched CPU time to find the FRBR relations in the data set |
16:19 |
|
kados |
thd: but where do you store those relations? |
16:20 |
|
kados |
not in the MARC data |
16:20 |
|
kados |
you have to have a FRBR data model |
16:20 |
|
kados |
that is separate from your records |
16:20 |
|
kados |
and used only for searching |
16:20 |
|
thd |
kados: and just when you thought that was enough there is FRAR and FRSR |
16:20 |
|
kados |
what are those? :-) |
16:20 |
|
kados |
authorities and serials? |
16:20 |
|
kados |
shit |
16:21 |
|
kados |
librarians making standards-- |
16:21 |
|
thd |
name and subject authority relations respectively although I am not perfectly confident about the acronyms |
16:23 |
|
thd |
kados: so right MARC 21 does not do authority control on all the needed elements often enough in the case of uniform titles or ever in many other cases |
16:24 |
|
thd |
kados: but you do not need a place in MARC to store the relations because you can store them in your XML meta-format |
16:26 |
|
thd |
kados: then you can change them easily by script in a batch process as you perfect your relation matching algorithm for overcoming what cataloguers never recorded explicitly |
16:27 |
|
thd |
or never recorded consistently in controlled fields |
16:27 |
|
kados |
right |
16:27 |
|
kados |
well if you come up with a xml format for storing FRBR |
16:28 |
|
kados |
and a script to create FRBR |
16:28 |
|
kados |
from MARC |
16:28 |
|
kados |
I'll write the OPAC for that :-) |
16:29 |
|
thd |
kados: I think if we have a reasonable place for storing the relations in a meta-record even if we have no good enough script yet we can experiment by degrees |
16:29 |
|
kados |
well ... we're gonna need some data |
16:29 |
|
kados |
but I suppose we could start with like 5-6 records manually created |
16:30 |
|
thd |
kados: we would actually have a working system that could provide the basis for the experiment rather than building one and reinventing the meta-record model later |
16:30 |
|
thd |
kados: how is the foundation coming along? |
16:30 |
|
kados |
no news yet |
16:33 |
|
thd |
kados: so the data for individual bibliographic records can stay in MARCXML while the relations are stored in larger meta-records |
16:33 |
|
kados |
hmmm |
16:33 |
|
kados |
but you still have to search on the FRBR dataset |
16:34 |
|
kados |
you can't just store the relations |
16:34 |
|
thd |
kados: because of the current limitation on Zebra indexing we have to store all immediately linked records together in one meta-record |
16:35 |
|
thd |
kados: meta records need to be work level records because of current Zebra indexing limitations |
16:37 |
|
thd |
kados: and they need to contain all lower levels and linked authority records within them |
16:39 |
|
thd |
kados: you have to search a database full of bibliographic records for the matching records at various levels and then test them for true matches first |
16:40 |
|
thd |
kados: the search may not be sufficient in itself |
16:41 |
|
thd |
kados: you have to compare likely candidates for satisfying some FRBR level test |
16:43 |
|
thd |
kados: so initially your meta-records would be mostly empty place holders for where you would eventually store matching records |
16:45 |
|
thd |
kados: yet if you have the system supporting the structure for the XML meta-record you do not have to write a completely new system when you have perfected your record matching script |
16:45 |
|
kados |
right |
16:45 |
|
kados |
but we need to: |
16:45 |
|
kados |
1. create an XML version of FRBR that we can index with Zebra |
16:46 |
|
thd |
kados: if you have to write a new system to do something useful with the experiment you will be much further from the goal |
16:46 |
|
kados |
2. create some example FRBR records in that structure |
16:46 |
|
kados |
3. define some indexes for the structure |
16:46 |
|
kados |
4. write an OPAC that can search those indexes |
16:46 |
|
kados |
i can do 3, 4 |
16:46 |
|
kados |
but not 1, 2 |
16:47 |
|
kados |
so if you do 1, 2, I'll do 3, 4 :-) |
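(One possible shape for step 1, offered purely as a starting point — every element name here is invented for illustration:

    <frbrcollection>
      <work id="w1">
        <title>Adventures of Tom Sawyer</title>
        <expression id="e1" lang="eng">
          <manifestation id="m1">
            <!-- the untouched MARCXML bibliographic record goes here -->
            <record/>
          </manifestation>
        </expression>
      </work>
    </frbrcollection>

The MARC records stay intact at the manifestation level, so the one-way-trip problem raised above is avoided: the FRBR structure is scaffolding around the records, not a replacement for them.)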
16:47 |
|
kados |
but now I need to get dinner |
16:47 |
|
kados |
I'll be back later |
16:47 |
|
thd |
kados: you left out FRAR and FRSR |
16:49 |
|
kados |
:-) |
16:49 |
|
kados |
be back later |
16:49 |
|
thd |
when is later? |
16:49 |
|
kados |
thd: an hour maybe? |
16:49 |
|
kados |
but I won't have much time to chat ... I've got a ton of work to do |
16:50 |
|
thd |
we both have a ton of work |
18:50 |
|
kados |
thd: are you back? |
18:51 |
|
kados |
thd: got an authorities question |
18:51 |
|
kados |
thd: http://opac.smfpl.org/cgi-bin/[…]thorities-home.pl |
18:51 |
|
kados |
thd: do a Personal Name search on 'Twain, Mark' |
19:17 |
|
ai |
morning |
19:18 |
|
ai |
can anyone give me an idea how to config ldap with koha plz |
20:03 |
|
thd |
kados: I had to buy another fan |
20:08 |
|
thd |
kados: am I looking at uniform title authorities? |
20:13 |
|
kados |
thd: I'm here |
20:17 |
|
thd |
100 10 $a Twain, Mark, $d 1835-1910. $t Celebrated jumping frog of Calaveras County. $l French & English |
20:18 |
|
thd |
kados: that is from the name/title index |
20:22 |
|
thd |
kados: so I think the issue is that what you have are postcoordinated authority headings which are not in NACO or SACO |
20:23 |
|
thd |
kados: the more I think about the super meta-record the more I like it |
20:23 |
|
thd |
kados: i think it can solve multi-MARC koha as well |
20:25 |
|
thd |
kados: and multiple names, subject, etc. authorities databases from different languages |
20:26 |
|
kados |
yea, it might |
20:26 |
|
thd |
kados: it would not solve the issues intrinsically but provide facility for a system that could solve them in due course |
20:26 |
|
kados |
yep |
20:27 |
|
thd |
kados: so what I imagine is a Zebra database of super meta-records |
20:28 |
|
thd |
a separate DB of MARC 21 bibliographic records |
20:29 |
|
thd |
a separate DB of MARC 21 authority records |
20:30 |
|
thd |
a separate DB of the same again for every other flavour of MARC |
20:31 |
|
thd |
a separate DB of Dublin Core records |
20:31 |
|
thd |
I left out holding records above |
20:31 |
|
thd |
a separate DB of OAI records |
20:32 |
|
thd |
a separate DB of ONIX records |
20:32 |
|
thd |
etc. |
20:33 |
|
kados |
thd: there is no need to have them separate |
20:33 |
|
kados |
thd: xslt can do transformations on the fly |
20:34 |
|
thd |
kados: well no but I think if you kept your sources separate then you would be better able to identify your source of error |
20:35 |
|
thd |
kados: you would not want to have your super meta-records coming up along with the source records you were trying to add to them or the other way around |
20:38 |
|
thd |
kados: I suppose you could control that with an indexed value in the meta records but certainly you need to keep the different MARC flavour source records in different DBs because you cannot reliably tell them apart |
20:39 |
|
thd |
kados: I think we should create a wiki scratch pad for the super meta-record format and the DB design and invite public comment |
20:40 |
|
thd |
kados: we need a good design quickly because tumer has a single focus and is going to implement something immediately |
20:41 |
|
thd |
kados: after he implements he will not have much desire to change things that he does not know that he needs |
20:42 |
|
thd |
kados: comment? |
20:44 |
|
thd |
kados: can we index different XML paths differently? |
20:48 |
|
thd |
kados: i mean <whatever><bibliographic><syntax name="some_syntax"><100> differently indexed from <whatever><bibliographic><syntax name="other_syntax"><100> ? |
21:17 |
|
ai |
any idea how to make the ldap authentication on koha? |
21:18 |
|
ai |
please |
21:18 |
|
russ |
ai: i can't help you, but have you tried the koha mailing list? |
21:19 |
|
russ |
i have seen a number of posts re ldap over the past couple of weeks |
21:21 |
|
russ |
there is a thread here |
21:21 |
|
russ |
http://lists.katipo.co.nz/pipe[…]/2006/009750.html |
21:21 |
|
ai |
thanks |
21:22 |
|
ai |
russ |
21:22 |
|
ai |
have U ever tried that? |
21:23 |
|
russ |
nope like i say not a tech person |
21:23 |
|
ai |
i can see there r 2 authen.pm files |
21:23 |
|
ai |
1 for ldap |
21:23 |
|
ai |
1 for normal |
21:23 |
|
ai |
r we just change the name around? |
21:23 |
|
ai |
oki |
21:23 |
|
ai |
cheers |
21:24 |
|
russ |
http://www.koha.org/community/mailing-lists.html |
21:41 |
|
thd |
ai: I do not know either because I do not use LDAP but the code for LDAP support has been much improved in the current cvs development |
21:43 |
|
thd |
ai: there is a thread on the koha list or koha-devel list in the last circa 9 months where someone solved problems with the implementation after much frustration with the original non-standard manner in which LDAP was implemented |
21:45 |
|
thd |
ai: the originator of the thread solved the problems and provided new code to fix them |
00:05 |
|
kados |
sorry for the bugzilla spam everyone |
00:06 |
|
kados |
I've gone over every single bug |
00:06 |
|
kados |
(not enhancements) |
00:06 |
|
chris |
no need to apologise for that |
00:06 |
|
kados |
and cleaned everything up |
00:06 |
|
chris |
thanks heaps for doing it |
00:06 |
|
kados |
I think we've got a manageable set to work with |
00:06 |
|
kados |
48 total remain |
00:06 |
|
chris |
cool |
00:07 |
|
kados |
that includes all versions < branch 2.2 |
00:07 |
|
chris |
excellent |
00:07 |
|
kados |
15 blockers |
00:08 |
|
chris |
right, we should squish those before 2.2.6 if we can |
00:08 |
|
kados |
definitely IMO |
00:08 |
|
kados |
I wrote a mail to paul |
00:08 |
|
kados |
and the list |
00:08 |
|
kados |
requesting this |
00:09 |
|
chris |
cool |
00:09 |
|
kados |
right ... time for a midnight snack :-) |
00:10 |
|
mason |
stay away from the biscuits! |
00:10 |
|
chris |
or cookies (for the north american audience) |
00:11 |
|
kados |
hehe |
00:55 |
|
kados |
chris, you about? |
00:55 |
|
kados |
mason, you too? |
00:56 |
|
kados |
just for the heck of it, i found the old version of PDF::API |
00:56 |
|
kados |
and took a look at the old barcode system |
00:56 |
|
kados |
it's got some nice features |
00:56 |
|
kados |
http://koha.afognak.org/cgi-bi[…]codes/barcodes.pl |
00:57 |
|
kados |
if you want to play with it |
00:57 |
|
kados |
(seems to even work) |
00:58 |
|
kados |
anyway, thought maybe the printer config and some of the js used on that page might be useful |
01:52 |
|
kados |
paul_away: you up yet? |
01:52 |
|
kados |
in a few minutes I bet :-) |
01:53 |
|
Burgundavia |
kados: what are you doing up? |
01:53 |
|
kados |
Burgundavia: I might ask the same of you :-) |
01:55 |
|
kados |
Burgundavia: just hacking away as usual |
01:59 |
|
Burgundavia |
kados: it is only midnight here, unlike in ohio |
02:04 |
|
osmoze |
hello #koha |
02:06 |
|
kados |
hi osmoze |
02:06 |
|
osmoze |
:) |
02:08 |
|
kados |
well ... I'm tired now :-) |
02:08 |
|
kados |
I will be back tomorrow, but I have a meeting in the am, so later in the day ... may miss paul |
02:09 |
|
kados |
paul_away: when you arrive, please weed through bugzilla spam and find my mail about 2.2.6 release |
02:34 |
|
thd |
kados: are you still up? |
02:39 |
|
hdl |
hi |
02:42 |
|
Burgundavia |
thd: 00:08 <kados> I will be back tomorrow |
02:43 |
|
Burgundavia |
that was about 30 mins ago |
02:44 |
|
thd |
Burgundavia: is tomorrow today or does kados know what day it is? |
02:53 |
|
Burgundavia |
thd: that would be today, north american time |
02:53 |
|
Burgundavia |
the 3rd |
02:54 |
|
thd |
Burgundavia: does kados know that today is today or did he think that 30 minutes ago was yesterday still? |
02:55 |
|
thd |
even if it was not actually yesterday still in his time zone |
02:57 |
|
Burgundavia |
thd: by tomorrow I assume he meant in about 9/10 hours from now |
03:01 |
|
thd |
Burgundavia: I often pay no attention to timezones or time. |
03:01 |
|
thd |
unless I have to do |
03:01 |
|
Burgundavia |
work in open source for long enough and you have to get the concept |
03:40 |
|
hdl |
paul : working on reports today. |
03:41 |
|
hdl |
kados : see my response to your acquisition bug : Does that help ? |
03:45 |
|
paul |
hdl: thanks for taking care of some of the bugs. i am working on permissions |
03:45 |
|
hdl |
permissions? |
03:51 |
|
paul |
#1039 |
03:51 |
|
paul |
let's tell each other what we're each cleaning up as we get to it, OK? |
04:45 |
|
thd |
paul: are you there? |
04:46 |
|
paul |
yep |
04:47 |
|
thd |
paul: did you read the logs about meta-records yesterday? |
04:47 |
|
paul |
thd: nope |
04:48 |
|
thd |
paul: I think Koha can save the world |
04:48 |
|
thd |
paul: http://wiki.koha.org/doku.php?[…]er_meta_record_db |
04:48 |
|
paul |
koha can save the world ? I thought this has been done 2006 years ago ... |
04:49 |
|
thd |
paul: well save it again |
04:49 |
|
paul |
lol |
04:51 |
|
thd |
paul: I have become too tired to start writing a DTD outline |
04:51 |
|
paul |
so, go to bed, it's almost time for frenchies to go for lunch |
04:51 |
|
thd |
paul: yes |
04:52 |
|
thd |
paul: look at the logs for discussion between tumer and kados about meta-records while I was disconnected |
04:53 |
|
thd |
yesterday |
04:54 |
|
thd |
paul: it culminated in kados exploding (with joy) |
06:44 |
|
kados |
paul, you around? |
06:44 |
|
kados |
I'm trying to get acquisitions working |
06:49 |
|
kados |
paul: still, after closing any basket, when I click on 'receive', and enter in a 'parcel code' i don't get any biblios listed |
06:49 |
|
kados |
paul: you can try it here: http://koha.smfpl.org |
06:49 |
|
kados |
paul: in both default and npl the behavior is the same |
06:51 |
|
kados |
hmmm ... |
06:51 |
|
kados |
now I try searching from the receipt page and I find an item I ordered |
06:52 |
|
kados |
but there is no submit button to save it |
06:53 |
|
kados |
so I click on edit, and now I'm back where I started it seems (even though this basket is closed) |
07:30 |
|
hdl |
kados |
07:51 |
|
paul |
hello dewey |
07:51 |
|
paul |
dewey : who is kados |
07:51 |
|
dewey |
rumour has it kados is becoming a true Perl Monger |
07:51 |
|
paul |
dewey who is paul ? |
07:51 |
|
dewey |
you are preparing to issue a release while the NPL templates are not working for the record editor now. |
07:52 |
|
toins |
hello dewey |
07:52 |
|
dewey |
salut, toins |
07:52 |
|
paul |
(/me is doing a demo for his nephew) |
07:53 |
|
hdl |
dewey : you ugly one !! |
07:53 |
|
dewey |
hdl: what? |
07:54 |
|
toins |
dewey, translate from french hello |
07:54 |
|
dewey |
toins: i'm not following you... |
07:54 |
|
toins |
dewey, translate from french bonjour |
07:54 |
|
dewey |
toins: hello |
08:13 |
|
kados |
paul: and hdl: I have updated smfpl with paul's recent commit |
08:13 |
|
paul |
kados |
08:13 |
|
paul |
just committed 1st fix to the acquisition problem |
08:13 |
|
paul |
the 2nd one arriving in the next minutes |
08:14 |
|
paul |
(i've found where it comes from) |
08:14 |
|
kados |
great ... the first one means now I see pending orders!! |
08:14 |
|
kados |
wohoo! |
08:15 |
|
kados |
and I can 'receive' a title, and save it ... wohoo! |
08:15 |
|
paul |
ok, 2nd part fixed too (& committed) |
08:16 |
|
kados |
what is the second part? |
08:16 |
|
kados |
javascript errors? |
08:16 |
|
paul |
I think we will have to get rid of this strange "catview" |
08:16 |
|
paul |
no, when you search for an order in the form |
08:16 |
|
kados |
ahh |
08:16 |
|
paul |
(it worked when you selected from the pending list, but not when you searched) |
08:16 |
|
kados |
ok ... |
08:16 |
|
paul |
that's why I missed your problem |
08:16 |
|
kados |
smfpl is updated |
08:17 |
|
paul |
thx for making a very detailed walkthrough, otherwise I could have investigated a lot! |
08:17 |
|
paul |
bug marked "fixed". |
08:17 |
|
kados |
thanks |
08:17 |
|
kados |
I'll update npl templates |
08:18 |
|
kados |
paul: so the fix is to just delete the catview? |
08:19 |
|
paul |
yep. |
08:19 |
|
paul |
I don't know/think the catview is useful |
08:19 |
|
kados |
ok |
08:23 |
|
kados |
paul: the acqui search for 'from existing record' is a title search, right? |
08:23 |
|
paul |
iirc yes |
08:23 |
|
kados |
ok ... I will update my wiki |
08:24 |
|
paul |
(and not a catalogsearch one. It's just a select from biblio where title like "...) |
08:24 |
|
kados |
ahh, ok |
08:27 |
|
kados |
paul: did we agree at dev week to have rel_2_2 use tabs in authorities editor? |
08:27 |
|
kados |
paul: i can't remember |
08:27 |
|
kados |
because it's quite hard to use authority editor |
08:28 |
|
kados |
(even without the bugs I reported, it's hard ) |
08:30 |
|
kados |
ok guys, I have to get to a meeting |
08:30 |
|
kados |
I'll be back in several hours |
08:30 |
|
kados |
thanks for bugfixing! |
09:15 |
|
johnb_away |
/nick johnb |
10:22 |
|
paul |
hello owen |
10:22 |
|
owen |
Hi paul |
10:23 |
|
owen |
Everyone's busy busy busy these days! |
10:23 |
|
paul |
kados & hdl/me are doing a competition: he opens bugs, we close them. |
10:23 |
|
owen |
:D |
10:23 |
|
paul |
the 1st who resigns has won :-D |
10:57 |
|
thd |
paul: I left out the most important part of the purpose section previously at http://wiki.koha.org/doku.php?[…]er_meta_record_db |
10:57 |
|
thd |
kados: see http://wiki.koha.org/doku.php?[…]er_meta_record_db |
11:00 |
|
owen |
Heh... best bugzilla cleanup quote so far: "processz3950queue needs luvin" |
11:48 |
|
thd |
johnb: are you present? |