Time |
S |
Nick |
Message |
12:00 |
|
hdl |
2. you input the biblio for the General serial |
12:00 |
|
hdl |
For instance : Official bulletin on the internet |
12:01 |
|
hdl |
Then you go to serials in the catalogue to open a new subscription. |
12:02 |
|
hdl |
Link the biblio with a supplier; provide the date of beginning, and the period or number of serials to be received with the subscription. |
12:02 |
|
hdl |
Make the Numbering Formula. |
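For illustration, the numbering formula meant here could look like the following. This is a sketch assuming the {X}/{Y} placeholder syntax used by Koha's serial numbering patterns; the exact syntax depends on the Koha version.

    Vol. {X}, No. {Y}

Here {X} and {Y} would advance according to the period settings entered in the previous step, e.g. {Y} incrementing with each expected issue and {X} once per volume.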
12:04 |
|
hdl |
And there you are... You can know which serial you are waiting for, and which have been received... in the OPAC as well as in the Intranet if FullSubscriptionHistory is set. |
12:04 |
|
kados |
interesting |
12:04 |
|
hdl |
Does that answer your question? |
12:04 |
|
kados |
I think so |
12:04 |
|
kados |
so when I search on the OPAC I'll see what issues are available |
12:04 |
|
hdl |
Yes. |
12:05 |
|
kados |
but what if I've got a bunch of serials with each issue in an 852 tag (for holdings) |
12:05 |
|
thd |
hdl: what about individual issue barcoding? |
12:05 |
|
kados |
hdl: is there a way to map those issues (in 852) to the serials in Koha? |
12:05 |
|
kados |
hdl: (or they could be in 952) |
12:06 |
|
hdl |
They are already stored in serial table. |
12:06 |
|
kados |
hdl: ? if I import a new MARC record it's already in the serial table? |
12:07 |
|
kados |
hdl: I'm talking about a library migrating to Koha using MARC files that have one record for each serial with many many 852 fields for the issues |
12:08 |
|
kados |
is there a way to map the issues to the serials table in Koha? |
12:09 |
|
kados |
does that make sense? ;-) |
12:09 |
|
hdl |
kados : No. I said that the serial table was used for subscription management....!!!! |
12:09 |
|
hdl |
kados : But I did NOT say it created a MARC record. |
12:09 |
|
hdl |
kados : "but what if I've got a bunch of serials with each issue in an 852 tag (for holdings)" -- I did not understand that. |
12:09 |
|
hdl |
kados : "is there a way to map the issues to the serials table in Koha?" -- I think there should be, but it needs some work. |
12:09 |
|
kados |
ahh |
12:10 |
|
thd |
hdl: If I know the library to which kados is referring, then the 852 fields for each issue have the volume and issue number in each 852 field. |
12:10 |
|
kados |
in your opinion is it good to store issues in 852 fields? or should each issue have its own MARC record |
12:10 |
|
kados |
thd: right |
12:10 |
|
hdl |
I think that it all depends on the library. |
12:11 |
|
kados |
thd: what do you think? |
12:11 |
|
thd |
kados: Why would each issue have its own MARC record? |
12:11 |
|
kados |
thd: isn't it risky to store issues in 852 fields since there's an upper limit to how large a MARC record can be? |
12:11 |
|
hdl |
If serial searching should list issue numbers, you should make a new MARC record for each number. |
12:12 |
|
kados |
thd: how is this normally handled? |
12:12 |
|
hdl |
If it is more a question of knowing whether there is a subscription to this serial... then you should map to the 852 field. |
12:12 |
|
kados |
thd: (within marc) |
12:12 |
|
thd |
kados: the problem, I think, is how the issues are stored, |
12:13 |
|
thd |
kados: Look at the poorly written yet useful migration document that I wrote. |
12:14 |
|
thd |
kados: It is on kohadocs.org now in section 2.2 for migration. |
12:15 |
|
kados |
thd: yep ... reading the serials holdings section now |
12:17 |
|
kados |
thd: how do you suggest addressing the issue? ;-) |
12:17 |
|
thd |
kados: The problem with the library to which you are referring is that they have put each issue in a separate 852 field rather than represent the combined information in fewer repeating fields. |
12:18 |
|
kados |
thd: right ... so they're doing it wrong then ;-) |
12:19 |
|
thd |
kados: Well they are using the limitations of their existing software perhaps. |
12:19 |
|
kados |
thd: I still think you'd run into a size limitation with the MARC record |
12:19 |
|
kados |
thd: for some serials |
12:20 |
|
kados |
thd: I'm wondering whether most libraries use MARC for serials or whether they do them using some kind of internal serials module like Koha's |
12:20 |
|
thd |
kados: In a fully MARC compliant database, multiple records can be linked to one another to avoid too much information in one record. |
12:20 |
|
hdl |
kados thd : In UNIMARC you can have a link down, i.e. from the serial to its issues, and/or a link up from the issues to the parent serial. |
12:21 |
|
hdl |
There should be the same in US-MARC. |
12:21 |
|
thd |
hdl: MARC 21 has the same provision for linking records to parent records, etc. |
12:22 |
|
kados |
does Koha support this? |
12:22 |
|
hdl |
Yes... |
12:22 |
|
kados |
I wasn't aware of it ... how does it work? |
12:22 |
|
hdl |
I did unimarc_field_4XX plugin. |
12:22 |
|
hdl |
Well.... |
12:23 |
|
hdl |
It just gets the biblionumber of the parent or child into the required field. |
12:24 |
|
thd |
kados: I had suggested this to Owen a month ago for what he was interested in doing for simply creating a temporary serial record for an issue that would be discarded after a brief period. |
12:24 |
|
hdl |
But then, searching through the links, or getting FULL inheritance: that is not implemented. |
12:25 |
|
thd |
kados, hdl : There is coding to be done for this problem, as Owen replied to me. |
12:25 |
|
hdl |
which one ? |
12:26 |
|
thd |
hdl: I mean implementing record linking more completely in Koha. |
12:29 |
|
hdl |
Yes. But it would need a detailed spec, to stick close to it and to know what is really needed. |
12:29 |
|
thd |
On many levels there are problems. In the OPAC, if every issue has its own record and is not discarded, the borrower does not want to browse through hundreds of issues to find the right one. |
12:35 |
|
owen |
http://koha.org/cgi-bin/logs.p[…]g+2020+20May+2020 |
12:39 |
|
owen |
Koha's serials management is pretty much disconnected from circulation. |
12:39 |
|
kados |
from MARC too unless I'm mistaken |
12:41 |
|
hdl |
No, kados, you're not, unfortunately. |
12:43 |
|
thd |
kados: It is connected to MARC if you create a barcode for each item and you have your barcodes linked to a MARC subfield is it not? |
12:43 |
|
owen |
Serials management doesn't have any way of connecting individual issues to individual items |
12:44 |
|
kados |
hdl: have a good evening |
12:44 |
|
hdl |
thx. |
12:44 |
|
kados |
hdl: we can talk about this again ;-) |
12:45 |
|
hdl |
When you want. It is interesting. |
12:46 |
|
kados |
owen: what if there was a 'serials framework' |
12:46 |
|
thd |
good night hdl |
12:46 |
|
kados |
owen: does that make sense? |
12:46 |
|
owen |
Heh...that's what prompted my discussion with thd about serials. |
12:46 |
|
kados |
ahh :-) |
12:47 |
|
owen |
I was telling him how libraries might like to have a framework especially for serials. |
12:47 |
|
thd |
I am about to paste it. |
12:47 |
|
kados |
it would also be nice if you could 'flag' a record and have it use a specific framework |
12:47 |
|
kados |
when you're importing |
12:47 |
|
thd |
relatively short. |
12:47 |
|
thd |
16/07/05 03:49:11+-5<thd:#koha>owen: the ideal way to address the disposable serials issue is with a 772 linking field |
12:47 |
|
thd |
16/07/05 03:50:19+-5<thd:#koha>I imagine there is some functionality one could borrow from the UNIMARC side |
12:47 |
|
thd |
16/07/05 03:51:19+-5<owen:#koha>Can you elaborate? |
12:48 |
|
thd |
16/07/05 03:52:13+-5<thd:#koha>A well designed 77X plugin should copy required information from a linked record and then you would only need to modify a few subfields in 3 or 4 fields |
12:48 |
|
thd |
16/07/05 03:54:17+-5<thd:#koha>So you would get all the bibliographic information that was the same for every issue of your serial copied from the other record and you would add the issue and holdings data. |
12:48 |
|
thd |
16/07/05 03:55:29+-5<thd:#koha>This would be a MARC21 compliant solution :) |
12:48 |
|
thd |
16/07/05 04:01:38+-5<thd:#koha>The linking fields are 76X - 79X. I am trying to fetch an example for you but my system is thrashing a bit. |
12:48 |
|
thd |
16/07/05 04:02:10+-5* indradg:#koha joined |
12:48 |
|
thd |
16/07/05 04:03:43+-5<thd:#koha>245 04$aThe Post boy.$nNumb. 2436, from Thursday December 21 to Saturday December 23, 1710. |
12:48 |
|
thd |
16/07/05 04:03:43+-5<thd:#koha>772 1#$7unas$aPost boy (London, England)$w(OCoLC)1234567 |
12:48 |
|
thd |
16/07/05 04:04:05+-5<thd:#koha>http://www.itsmarc.com/crs/Bib1390.htm |
12:48 |
|
thd |
16/07/05 04:06:04+-5<thd:#koha>owen: In the example above, 772 $w is the control number for the linked record on your system and 245 $n has been added for the issue. |
12:48 |
|
thd |
16/07/05 04:08:12+-5<owen:#koha>Sounds great in theory :) |
12:48 |
|
thd |
16/07/05 04:08:23+-5<thd:#koha>owen: 245 $a , 772 $a, and much else could have been copied by a plugin from the linked record |
12:48 |
|
thd |
16/07/05 04:12:31+-5<thd:#koha>owen: I know there may be a more efficient way to solve your immediate problem with less effort but this standards compliant method would be reusable even for permanent records and other institutions where people go crazy over MARC standards :) |
12:48 |
|
thd |
16/07/05 04:23:48+-5<owen:#koha>Something to be taken up with the developers... |
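As a sketch of what such a 77X plugin could do, the fragment below builds a child issue record from a parent serial record with MARC::Record, which Koha already uses. The values are taken from thd's pasted example; cloning the parent wholesale and the variable names are this editor's assumptions, not existing Koha code.

    use MARC::Record;
    use MARC::Field;

    # $parent is the general serial record already in the catalogue.
    # Clone it so the shared bibliographic data need not be retyped.
    my $issue = $parent->clone();

    # Add the issue designation to the title, as 245 $n.
    $issue->field('245')->add_subfields(
        n => 'Numb. 2436, from Thursday December 21 to Saturday December 23, 1710.'
    );

    # Link back to the parent record; $w carries its control number.
    $issue->append_fields(
        MARC::Field->new('772', '1', ' ',
            '7' => 'unas',
            a   => 'Post boy (London, England)',
            w   => '(OCoLC)1234567',
        )
    );

Only the issue statement, holdings, and a barcode would then remain to be added by hand.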
12:50 |
|
owen |
I don't see why the serials management process could integrate an item-creation routine each time an issue was received. At that point you could enter barcode information, etc. |
12:50 |
|
owen |
could->couldn't |
12:51 |
|
thd |
owen: exactly |
12:52 |
|
owen |
kados: what other kinds of things is this library looking for in serials? |
12:53 |
|
thd |
I assume that the procedure that hdl was referring to does not allow an individual barcode per issue. |
12:53 |
|
kados |
owen: I really have no idea |
12:53 |
|
kados |
owen: it's the law library that contacted me recently |
12:53 |
|
kados |
owen: that got me thinking about serials |
12:53 |
|
kados |
owen: because they said they'd have to look at the serials support closely |
12:53 |
|
kados |
owen: I said I'd look into it and get back to them |
12:54 |
|
kados |
owen: what I'll probably do is get a list of things they want to do |
12:54 |
|
kados |
owen: and report back to koha-devel |
12:54 |
|
thd |
kados: So you were not referring to the church library? |
12:54 |
|
owen |
You and I probably have similarly-lacking knowledge of serials management, since it's not something we do at NPL |
12:54 |
|
kados |
thd: no |
12:54 |
|
kados |
owen: exactly |
12:54 |
|
kados |
owen: all it takes is one client though ;-) |
12:55 |
|
kados |
owen: functional abstraction rocks ;-) |
12:55 |
|
kados |
owen: except when it doesnt ;-) |
12:55 |
|
thd |
kados: what is their existing software? |
12:55 |
|
kados |
thd: not sure yet |
12:56 |
|
kados |
thd: but it's one of the cheaper ILS solutions ... not one of the big guys |
12:56 |
|
kados |
thd: probably Follett or Sagebrush |
12:57 |
|
kados |
nope ... don't have it recorded |
12:59 |
|
thd |
kados: is the client near enough to you so that you can see their existing system in action? |
13:00 |
|
kados |
thd: yea ... |
13:00 |
|
kados |
thd: same state even ;-) |
13:01 |
|
thd |
kados: well, it would be good to have some real information to see how it works with what they are already using. |
13:01 |
|
kados |
thd: yep |
13:01 |
|
kados |
thd: I'm planning to do just that |
13:03 |
|
thd |
kados: about your record matching problem over Z39.50 |
13:03 |
|
kados |
thd: yes? |
13:04 |
|
thd |
kados: you can see the same problem in a smaller scale in Koha. |
13:04 |
|
thd |
kados: NPL has few enough records that it should not appear much. |
13:06 |
|
thd |
kados: In Koha all the various fields that might contain author information can be linked together for an author search. |
13:07 |
|
thd |
kados: in MARC 21 that is 100, 245 $c, 700, etc. |
13:07 |
|
thd |
kados: there are many more; I am just simplifying. |
13:08 |
|
kados |
right |
13:08 |
|
thd |
kados: so if you search for John Smith, |
13:09 |
|
thd |
you may find a record that has John Knox and James Smith. |
13:09 |
|
kados |
right |
13:10 |
|
thd |
kados: The larger the database is the more likely such mismatches are to be a problem. |
13:12 |
|
thd |
kados: This goes to much of what Karen was trying to tell me about how the number and size of indexes increase when you do things right. |
13:14 |
|
thd |
kados: The match I gave above should never happen because the query was being run against too many indexes combined as if they were one. |
13:15 |
|
thd |
kados: combining the indexes in that way can improve search performance only at the price of lost accuracy. |
13:17 |
|
thd |
kados: Ideally, in a conventional system, you want to search 100 separately from 700 and only report a match if any one of them has a match. |
13:17 |
|
thd |
kados: not if there is a match collectively on all the terms indexed. |
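A toy Perl illustration of the difference, assuming $record holds a record with 100 $a "Knox, John" and 700 $a "Smith, James" (the record and variable are hypothetical; the calls are MARC::Record's):

    use MARC::Record;

    # Author subfields from the 100 field and all 700 fields.
    my @authors = map { lc $_->as_string('a') }
                  $record->field('100'), $record->field('700');

    # Combined index: both terms occur somewhere, so "john smith" matches.
    my $combined     = join ' ', @authors;
    my $combined_hit = ( $combined =~ /john/ and $combined =~ /smith/ );  # true: a false drop

    # Per-field check: no single author field contains both terms.
    my $per_field_hit = grep { /john/ and /smith/ } @authors;             # 0: correctly rejected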
13:19 |
|
thd |
kados: 245 $c, if included, is a problem because the statement of responsibility there puts all the authors in one subfield. |
13:20 |
|
thd |
kados: sorry, temporarily distracted by a phone call one moment. |
13:23 |
|
thd |
kados: UNIMARC provides for subfield separation for last names in the author fields. |
13:25 |
|
thd |
kados: forcing the user to specify author queries exactly to match one combined index can be problematic. |
13:27 |
|
thd |
kados: so for the OPAC, conventional systems check each relevant index separately consuming many CPU cycles. |
13:27 |
|
thd |
kados: The exact problem you were having with Z39.50 is a little different. |
13:30 |
|
thd |
kados: Whether the system saves some cycles on Z39.50 searches at LC by using a combined index like Koha does, I have not yet tested. |
13:32 |
|
thd |
kados: however, I have found both author and title matches to personal name subjects, 600, and formatted tables of contents, 505, for authors. |
13:33 |
|
thd |
kados: title words also match formatted tables of contents, 505, for LC Z39.50. |
13:34 |
|
thd |
kados: to get the most relevant matches over z39.50 you have to adjust the query and the search attribute syntax. |
13:35 |
|
kados |
thd: what do you suggest? |
13:36 |
|
thd |
kados: most importantly, for your purpose in identifying exactly matching records, titles must be a phrase search. |
13:37 |
|
thd |
Yaz supports two useful types of search attribute in addition to all the others. |
13:38 |
|
thd |
I have not looked to see how it is specified in Net::Z39.50 |
13:42 |
|
thd |
kados: One useful way of supplying a phrase search in Yaz is s=pw . |
13:43 |
|
thd |
kados: This gives you a phrase search for multiple tokens in the query. |
13:45 |
|
thd |
kados: I do not remember the numeric equivalent but I seemed to have trouble with numeric equivalents sometimes. |
13:46 |
|
thd |
kados: you need to search on multiple elements as well. |
13:48 |
|
thd |
kados: author-title as a single search covers too many fields, and cannot work as a phrase search with both author and title if you use both, unless my information misses something about author-title searches. |
13:51 |
|
thd |
kados: you need u=4 (i.e. 1=4) for title; so for your title phrase from 245 $a, omitting the leading article, you would have u=4 s=pw as attributes. |
13:54 |
|
thd |
kados: In the same query you should have the author's last name from 100 $a (or 700 $a etc. if 100 is missing) to use with u=1003 attributes. |
13:57 |
|
thd |
kados: that might need some refinement if corporate authorship is an issue. I do not remember at this moment if u=1003 is all authors or only personal names. |
13:58 |
|
thd |
kados: Then, if there are multiple manifestations of the same work, or if that is not accurate enough otherwise, you need to go next to publisher. |
14:00 |
|
thd |
kados: Before I forget to mention it, the reason I advise against using first names for authors is that you do not know whether you have authority-controlled names. |
14:02 |
|
kados |
thd: makes sense |
14:02 |
|
thd |
kados: you might try experimenting with last name and first initial as a string if that could be made to work with LC over Z39.50. that would be a string search with a numbered search attribute not a phrase search. |
14:03 |
|
thd |
kados: So publishers are very tricky because there is no authority control for publisher names. |
14:05 |
|
thd |
kados: you want to identify the Wiley in John Wiley and Sons and search only for that if you can. |
14:08 |
|
thd |
kados: certainly for publisher you can eliminate common words like 'and', 'publishers', 'university', 'press', etc. |
14:16 |
|
thd |
kados: so with 260 $b extract the most important words to submit with u=1018 s=al attributes. s=al is for all words. |
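Putting the attributes together, a query of the kind thd describes could be sent like this. A sketch only: it uses the later ZOOM-Perl API rather than the Net::Z39.50 module mentioned above, the LC target address is assumed, and using the numeric structure attribute 4=1 (phrase) as the counterpart of s=pw is this editor's reading of the Bib-1 attribute set.

    use ZOOM;

    my $conn = ZOOM::Connection->new('z3950.loc.gov:7090/Voyager');

    # Title as a phrase (1=4), author surname (1=1003), and the
    # significant publisher word (1=1018), all ANDed together.
    my $rs = $conn->search_pqf(
          '@and @and '
        . '@attr 1=4 @attr 4=1 "post boy" '
        . '@attr 1=1003 smith '
        . '@attr 1=1018 wiley'
    );
    printf "%d candidate matches\n", $rs->size();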
14:17 |
|
thd |
sorry that my response time is slowing; my system has been thrashing for a while today due to a memory leak in Flash ads in Firefox. |
14:18 |
|
kados |
:-) |
14:20 |
|
thd |
kados: Dates or edition information might be used but how they are recorded in records is too variable to be relied upon over Z39.50. |
14:21 |
|
thd |
260 $c may have only a reprint date in your record or the Z39.50 target record etc. |
14:23 |
|
thd |
kados: For the past two weeks I have been intensely researching how to identify the true duplicates that may be left in a result set of potential candidates. |
14:24 |
|
thd |
kados: I have seen the dirtiest microfilm for an academic journal that I have ever encountered. |
14:24 |
|
thd |
kados: The good detailed published research seems to be quite old. |
14:25 |
|
thd |
kados: All my interlibrary loan requests are still outstanding. |
14:27 |
|
thd |
kados: Identifying the true duplicates in a candidate result set gets very tricky quickly because of the lack of absolute uniformity of cataloguing practice between cataloguers and over time and following AACR2 etc. |
14:29 |
|
thd |
kados: In the UNIMARC world they at least have the luxury of authority control over publisher names. |
14:34 |
|
thd |
kados: If you can identify duplicates reliably over Z39.50, afterwards you could enhance records automatically by merging in fields from targets that had more complete records, or merely additional encoded fields. |
14:36 |
|
thd |
kados: MELVYL used to do this and it is the major focus of section 3 of my yet to be sent email to you. |
14:37 |
|
kados |
thd: you ever going to send it ? ;-) |
14:38 |
|
thd |
kados: two weeks ago I had it very close to finished but I could not stay awake long enough to proof read it. |
14:38 |
|
thd |
kados: I mean I could not see straight to proof read it after I had finished writing :) |
14:39 |
|
thd |
kados: I want to add the appropriate references to the published research as well. |
14:41 |
|
thd |
kados: Every catalogue should have the most complete records possible but even OCLC master records do not achieve this for many cases. |
14:43 |
|
kados |
thd: that's because they're written by librarians ;-) |
14:44 |
|
kados |
thd: s/written/contributed/ |
14:44 |
|
thd |
kados: I have not addressed the old MELVYL merged record issue with Karen; we mostly talked about her ideas about making systems more efficient, so that you could actually start to do FRBR efficiently instead of spending too many CPU cycles for a less than effective result. |
14:45 |
|
thd |
kados: OCLC pays libraries for both contributing original and enhancing existing records. |
14:45 |
|
kados |
thd: but all records come from librarians |
14:46 |
|
thd |
kados: The payments are obviously not high enough. |
14:47 |
|
thd |
kados: you have added routines to add information to the OPAC, displayed with records, that does not come from libraries. |
14:48 |
|
kados |
thd: right |
14:48 |
|
thd |
kados: Amazon gets information from distributors and publishers, as well as creating their own. |
14:48 |
|
kados |
thd: does that mean I should get paid the same as a cataloger who manually updated those records? ;-) |
14:49 |
|
kados |
thd: pipe dreaming ;-) |
14:49 |
|
thd |
more kados, more; |
14:49 |
|
kados |
thd: hehe |
14:49 |
|
thd |
;-) |
14:49 |
|
kados |
thd: yea ... because I can do it in an hour instead of a year ;-) |
14:50 |
|
thd |
kados: there is a difference though with what you did and what a cataloguer does in enhancing records. |
14:53 |
|
thd |
kados: You need to comply with Amazon's data licensing terms against permanent long term storage of their data. But a cataloguer enters the enhanced information into a record so that it is searchable. What you have done is not searchable yet. |
14:56 |
|
osmoze |
thd, have you received my mail? |
14:57 |
|
thd |
kados: Information from Amazon should go temporarily into 5XX fields, or whatever fields are appropriate, so that it can be searched. |
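A sketch of that idea with MARC::Record; picking 520 (the MARC 21 summary note) for the blurb and the $amazon_description variable are assumptions, not existing Koha behaviour.

    use MARC::Field;

    # Put the third-party blurb in a note field so that it is indexed and
    # searchable; per the licence, refresh or delete it periodically.
    $record->append_fields(
        MARC::Field->new('520', ' ', ' ', a => $amazon_description)
    );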
14:59 |
|
thd |
osmoze: yes, thank you. I was too tired last night to examine your records, but what is your relation to Ecole Normale Supérieure de Cachan? |
14:59 |
|
osmoze |
nothing ^^ |
15:00 |
|
osmoze |
I've just been searching for any Z39.50 server in France |
15:00 |
|
osmoze |
for me |
15:00 |
|
osmoze |
I'm a little public library, not a school library |
15:00 |
|
thd |
osmoze: their records do have the ISO 5426 character set. |
15:01 |
|
osmoze |
yes |
15:01 |
|
osmoze |
that's what you were searching for, no? |
15:01 |
|
thd |
osmoze: In their records accented characters are multibyte. |
15:02 |
|
thd |
osmoze: yes that is exactly what I was expecting to find. |
15:05 |
|
thd |
osmoze: Their records have standard UNIMARC character set encoding using ISO 5426 that Koha is not designed to use yet. |
15:06 |
|
osmoze |
hum...and me ? |
15:07 |
|
osmoze |
I don't understand why you say that Koha is not designed to use it |
15:07 |
|
osmoze |
it is only a matter of characters |
15:08 |
|
osmoze |
so, the functioning of Koha does not differ |
15:08 |
|
thd |
osmoze: The confusing thing is that BNF would seem to be sending records over Z39.50 in single byte 8859-15 while the encoding given in 100 specifies double byte ISO 5426. |
15:09 |
|
osmoze |
hum... OK :) I understand now :) |
15:13 |
|
thd |
osmoze: Koha would need some special routines to change the user query in the OPAC from single byte 8859 to multibyte ISO 5426 for searches on records storing data in ISO 5426. |
15:14 |
|
thd |
osmoze: The opposite would need to be done for displaying the record. |
15:16 |
|
thd |
osmoze: The record editing module would need to have a special ISO 5426 font that may not be possible without extending the functionality of the web browser. |
15:17 |
|
thd |
kados: do you know anything about adding support for non-standard fonts to a web browser? |
15:17 |
|
osmoze |
hum... OK, but what is the difference between ISO 8859-1 (or -15) and ISO 5426? |
15:18 |
|
owen |
There is some available technology for using 'downloadable' fonts in a web page, but it is limited and not cross-browser friendly |
15:19 |
|
owen |
Why would you need a non-standard font? |
15:20 |
|
thd |
osmoze: ISO 5426 just represents the accents. |
15:20 |
|
osmoze |
yes but, in 8859-15 accents are present |
15:23 |
|
thd |
owen: Koha cannot support existing MARC standards without supporting MARC-8 for MARC 21 and ISO 5426 for UNIMARC. There may be some other character set issues as well but those are the main sets used for Latin characters. |
15:24 |
|
thd |
owen: NPL has been cheating, which is fine for NPL, but that will not do at most libraries with a more diverse collection that is more likely to have records with accents. |
15:24 |
|
owen |
It's not a question of the font in the browser, it's a question of the data stored in the database and the data coming in from the user's keyboard. |
15:25 |
|
owen |
We can help by giving the proper encoding to the web pages and to the database structure |
15:25 |
|
owen |
Does UTF-8 not cover the proper character set? |
15:26 |
|
thd |
owen: A problem is that there is no standard web page encoding for these library character sets. |
15:26 |
|
owen |
What in particular makes it a 'library character set'? |
15:27 |
|
thd |
owen: Well actually converting to 8859 or UTF-8 for display would be necessary. |
15:28 |
|
thd |
owen: and then converting back again. |
15:28 |
|
owen |
What makes this character set necessary for libraries? |
15:29 |
|
thd |
owen: for storing an edited record's information in a standards-compliant way. |
15:29 |
|
thd |
owen, osmoze: these library character sets were designed before unicode. |
15:29 |
|
owen |
Yes, but what makes the character set suitable for libraries? |
15:30 |
|
owen |
What characters are included that aren't available in other character sets? |
15:31 |
|
thd |
owen: These character sets are difficult to work with and bad in the present time compared to the others. |
15:32 |
|
thd |
owen: Unfortunately, these are the ones supported by library standards, and contained in properly encoded library records. |
15:33 |
|
owen |
Saying the character set is different because it's the one required by library standards doesn't really provide any useful information about the character sets themselves. |
15:35 |
|
thd |
owen: you can use other encodings, but that requires converting the record and changing the coded character set encoding designation in MARC 21 as required for MARC 21 and UNIMARC 100 $a/26-29 etc. |
15:36 |
|
owen |
thd: what makes the library-specific character encoding different from, say, UTF-8? |
15:36 |
|
thd |
owen: sorry, I cannot type fast enough and my system is thrashing. |
15:39 |
|
thd |
owen, osmoze: UTF-8 is well supported outside library records and would be easy to use but it has a different encoding from library records currently. |
15:40 |
|
genji |
hey kados, you still in? |
15:42 |
|
thd |
owen, osmoze: If you encode your records in a superior encoding, such as full Unicode, you will not be able to support exchange of your records with other libraries, because their systems are mostly not compliant with full Unicode and full Unicode is not included in library standards. |
15:43 |
|
owen |
Is there any reason why Koha couldn't just have an export option that allowed the proper encoding? Is there any reason to actually *store* using this required encoding? What advantage does the encoding offer? |
15:43 |
|
osmoze |
oki oki ^^ |
15:44 |
|
osmoze |
but exported records are often in the same encoding, no? I'm thinking that my records don't interest you |
15:44 |
|
osmoze |
no ? |
15:44 |
|
thd |
owen: that would be the best approach. The extra advantage is only for record exchange, in not having to convert for exchange purposes. |
15:45 |
|
owen |
Seems like it's hardly an advantage to 'not have to convert for exchange' if you have to figure out how to re-write Koha to handle an obscure character encoding. |
15:46 |
|
thd |
osmoze: Well I am expecting that your records are interesting simply from the character set encoding being ISO 8859-15 while the 100 $a/28-29 indicates that the encoding should be ISO 5426. |
15:48 |
|
thd |
owen: you are perfectly right about which would be more work and more efficient. |
15:49 |
|
thd |
I mean and which other would be more efficient. |
15:52 |
|
thd |
owen: so koha should move to UTF-8 or full unicode and then convert for import and export. Either way is still a lot of work to do once. |
15:53 |
|
owen |
I would be surprised if there were many ILSes out there that couldn't import records in non-standard encodings |
15:54 |
|
thd |
owen: You cannot expect that the OPAC user will have UTF-8 or unicode outside the library so conversion would be needed for display and queries but at least standard tools are available for that purpose. |
15:55 |
|
owen |
So you're saying that browser support for UTF-8 shouldn't be expected? |
15:57 |
|
thd |
owen: I think that for the browser to support UTF-8 the user has to install UTF-8 font support. I am not sure that the millions of Windows 98 and earlier users have the right font support available by default. |
15:58 |
|
owen |
I'm guessing that the worst that will happen is that they'll get some blank characters where a special character should be |
15:59 |
|
owen |
http://www.cafepress.com/nucleartacos.26746952 |
15:59 |
|
thd |
owen: The newer the OS, the more likely the font support is there without special user effort. The older the OS, the more likely a special effort is needed, for example to add CJK support. |
16:00 |
|
thd |
owen: blanks, question marks or strange boxes. |
16:00 |
|
owen |
CJK support? |
16:07 |
|
thd |
owen: Chinese, Japanese, and Korean. If you have Windows 95 or Windows 98, and you install CJK support or some other support that requires at least UTF-8 then the system supports UTF-8. Otherwise the system does not support UTF-8 at all. UTF-8 support could of course be added for only Latin characters but even then requires a special update for most European language users. |
16:08 |
|
thd |
owen: European languages do not need UTF-8 or unicode but CJK requires at least UTF-8. |
16:15 |
|
thd |
owen: everything would work much better with full unicode as the standard for library records but the ALA rejected that for lack of readiness and fear of change a few months ago. |
16:19 |
|
thd |
owen: I am given to understand that major ILS vendors want this but the librarians do not want to carry the one time expense of record conversion. |
16:20 |
|
thd |
s/librarians/library administrators |
16:21 |
|
thd |
owen: The cataloguers want this because it would make their task a little easier. |
16:22 |
|
osmoze |
bye |
16:23 |
|
thd |
good night osmozZz |
16:23 |
|
genji |
aww |
16:23 |
|
osmozZz |
thx :) |
16:24 |
|
genji |
wanted to hash something out with someone. |
16:24 |
|
genji |
owen, you available? |
16:24 |
|
owen |
Yes |
16:25 |
|
thd-away |
genji will you be up in 6 hours? |
16:25 |
|
genji |
yup |
16:25 |
|
genji |
will owen? |
16:26 |
|
thd-away |
owen is up now though, but I am gone |
16:27 |
|
genji |
right, I'll talk to owen... if I need further help, I'll ask in 6 hours. |
16:27 |
|
genji |
anyway, owen... you know how each borrower has flags right? |
16:27 |
|
owen |
Yes |
16:28 |
|
genji |
Right. Now, kados' idea was to expand that idea: give borrower categories permissions, and have them inherited by the users in each group. |
16:30 |
|
genji |
What you think of that? |
16:31 |
|
owen |
I'm not sure what 'inherit to the users in their group' would mean |
16:34 |
|
owen |
You just mean that each person in a particular category would have the permissions that had been set for that category? |
16:34 |
|
genji |
okay..... If I create a borrower in group "Admin", the Admin group will give its permissions to the borrower in its category. |
16:35 |
|
owen |
I think that makes a lot of sense. |
16:35 |
|
owen |
It would also set things up so that libraries could think about setting other staff-specific parameters like loan length, etc. |
16:37 |
|
genji |
Then how do you represent permission overrides in the modify user flags page? In Windows, an unset inheritance would be represented by a grey check in a checkbox. But in web browsers we can't do that. |
16:38 |
|
owen |
Ah, so you're thinking it should be possible to override the permissions set by a borrower category? |
16:41 |
|
genji |
if the person inside modify flags desires it, yes. but how to represent inheritance? |
16:42 |
|
owen |
Possibly a second column indicating which flags to override? (Of course that assumes you only override something to *on*) |
16:43 |
|
genji |
hmm. |
16:50 |
|
genji |
What if you want to override something to off? |
16:52 |
|
genji |
hmm. this is difficult. |
16:54 |
|
genji |
how does one represent the fact that a flag was overridden, on previous edits? |
16:56 |
|
owen |
What about two columns, one showing group standard flags and one current user flags? |
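One way to make overrides work in both directions, sketched in Perl. It assumes flags stay a bitmask, as in Koha's borrowers.flags; the two per-borrower columns, an override mask and an override value, are hypothetical.

    # Start from the category default, then apply per-borrower overrides.
    # A bit set in $override_mask means "this flag is overridden here";
    # the matching bit in $override_value says whether it is forced on or off.
    sub effective_flags {
        my ($category_flags, $override_mask, $override_value) = @_;
        return ( $category_flags & ~$override_mask )
             | ( $override_value &  $override_mask );
    }

    # Category grants bits 1 and 2; the borrower forces bit 3 on and bit 2 off:
    my $flags = effective_flags( 0b0110, 0b1100, 0b1000 );   # yields 0b1010

This maps naturally onto the two columns just proposed: a read-only column showing the category flags, and an editable column showing the effective result, with any difference recorded in the mask.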
16:57 |
|
genji |
hmm.. and the group standard flags are fixed, just a graphical representation? |
16:58 |
|
owen |
Yeah |
16:58 |
|
genji |
so the only time the flags are inherited, is in new borrower mode? |
16:59 |
|
owen |
I'm assuming the flags would be set according to which borrower category you chose when you added the member. And maybe you couldn't even create a borrower with higher privileges than yourself |
17:00 |
|
genji |
k. laters then. |
17:00 |
|
owen |
I'll check the logs tomorrow |
03:12 |
|
hdl |
kados, thd, owen: what you described at 19:50 (French time) yesterday, and at 4:23 (NZ time) on 16/07, is actually done with the unimarc_field_4XX plugin. |
03:12 |
|
hdl |
kados, thd, owen : Thus, one can build a 7XX plugin on the basis of this plugin... (maybe some changes should be made to the subfield labels) |
03:12 |
|
hdl |
Hi, Sylvain. |
03:12 |
|
Sylvain |
hi hdl |
03:30 |
|
osmoze |
good morning |
03:59 |
|
hdl |
hi osmoze |
04:19 |
|
hdl |
thd : I think that owen is right in saying that Koha should now head toward UTF-8 support, and then use conversions from and to the legacy charsets only on import/export. |
04:19 |
|
hdl |
That is required for CJK but also for Arabic scripts... and it seems not so far away for Koha. |
05:11 |
|
hdl |
hi indradg |
09:43 |
|
thd |
hdl hello |
09:43 |
|
hdl |
thd hello ! |
09:43 |
|
hdl |
nice nap ? |
09:43 |
|
thd |
yes |
09:45 |
|
thd |
hdl: part of what I had said about character set issues yesterday was mistaken. I need to correct that. |
09:46 |
|
hdl |
On which theme : serials, Z3950 ? |
09:46 |
|
hdl |
both ? |
09:48 |
|
thd |
hdl: I had misinterpreted some documents about the MARC-8 set for MARC 21 where it referred to that set as default and made an improper assumption. |
09:49 |
|
thd |
hdl: MARC 21 has allowed UTF-8 since 2000, though not full Unicode, provided the leader, 000, is set to 'a'. |
09:50 |
|
thd |
hdl: the existing records in the world are in MARC-8 and require conversion. |
09:52 |
|
thd |
hdl: the reason that full Unicode is not allowed yet has been given as ILS system unreadiness. But the vendors, I have been told, are waiting on the standard to require it before implementing full Unicode. |
09:52 |
|
hdl |
hi owen. |
09:53 |
|
owen |
Hi |
09:53 |
|
hdl |
Why not have a full Unicode Koha before then? |
09:54 |
|
hdl |
It would be more than great from a marketing point of view when ILS vendors come to UTF-8. |
09:54 |
|
thd |
hdl: Koha should have full unicode because it is in the UNIMARC standard. |
09:55 |
|
thd |
hdl: If 100 $a/25-27 is set to 50 then the record is full unicode. |
09:56 |
|
thd |
hdl: Of which, UTF-8 is a subset. |
09:57 |
|
hdl |
So: that leads to choosing UTF-8, with transcoding on US-MARC import and export, or even only on import? |
09:58 |
|
thd |
hdl, owen: MARC::Charset provides conversion from MARC-8, thank you again Ed Summers. |
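A minimal sketch of that conversion, using the function interface exported by later MARC::Charset releases (versions contemporary with this log exposed an object interface instead, so treat the exact calls as an assumption):

    use MARC::Charset qw(marc8_to_utf8 utf8_to_marc8);

    my $utf8  = marc8_to_utf8($marc8_bytes);   # MARC-8 -> UTF-8 on import
    my $marc8 = utf8_to_marc8($utf8);          # UTF-8 -> MARC-8 on export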
09:59 |
|
hdl |
AHA :D |
09:59 |
|
hdl |
smileglasses |
10:00 |
|
thd |
hdl, owen: There is also a closely related module for converting back into MARC-8. |
10:05 |
|
thd |
hdl, owen: The existing base of UNIMARC records in ISO 5426 can be converted with Encode::MAB2. |
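A sketch of that route, assuming the 'mab2' encoding name that Encode::MAB2 registers with Perl's Encode (MAB2's character set is ISO 5426 based, hence its usefulness here):

    use Encode qw(decode);
    use Encode::MAB2;   # registers the 'mab2' encoding

    my $text = decode('mab2', $iso5426_bytes);   # ISO 5426 bytes -> Perl/Unicode text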
10:05 |
|
hdl |
MAB2 : what is this format ? |
10:06 |
|
thd |
hello osmoze and owen I hope you have read my correction from yesterday above. |
10:07 |
|
thd |
hdl: MAB2 is the German MARC and is ISO 2709 compliant unlike MAB, which had not been. |
10:10 |
|
thd |
hdl, owen, osmoze: changing character sets in the record is supposed to require changing the character set encoding designations in the record. |
10:11 |
|
thd |
hdl, owen, osmoze: Otherwise another system cannot be expected to know how to read exchanged records. |
10:12 |
|
hdl |
thd : this can be managed when exchanging data. |
10:14 |
|
thd |
hdl: BNF seems to be doing something that may be useful but is certainly not standards compliant: providing UNIMARC records in the non-UNIMARC-standard ISO 8859-15 while leaving the record encoding set to ISO 5426. |
10:15 |
|
hdl |
But most of the time, people inside the libraries we serve either DON'T fill in these fields, whether out of laziness or ignorance, or DO NOT fill them in correctly. |
10:17 |
|
hdl |
So it should be automatically generated. |
10:17 |
|
hdl |
Maybe. |
10:17 |
|
thd |
hdl: many fields could be set automatically to help the lazy. Or provide a screen that the user cannot move past without setting some required information to prompt the lazy :) |
10:21 |
|
thd |
hdl: The character set could certainly be guessed at by the system from what was in the record for setting automatically. If the user wanted to use another one than the default the user would need to change something. |
10:23 |
|
thd |
hdl, owen, osmoze: conversion to 8859 for the OPAC would still be needed for the user systems outside the library that do not have UNIMARC support installed. |
10:26 |
|
thd |
hdl, owen, osmoze: I suspect that no one wants to refuse them service when their system shows C e?z a n n e instead of Cézanne. |
10:27 |
|
owen |
Not exactly refusal of service, but of course that would be ideal |
10:28 |
|
thd |
or whatever actually happens, the link to I ? Unicode was :-) |
10:32 |
|
thd |
hdl: so the plugin that you referred to, does that create a new item record based on the parent record? |
10:33 |
|
hdl |
thd : I agree with you : Conversion to ISO-8859 for OPAC. |
10:34 |
|
hdl |
thd : No. It is a plugin to help biblio input. |
10:34 |
|
hdl |
unimarc_field_4XX are UNIMARC links with other biblios. |
10:35 |
|
thd |
hdl: what does it do to help biblio input and what is the name of the plugin? |
10:36 |
|
hdl |
So you can link this plugin to a 7XX field and click on the "..." |
10:36 |
|
hdl |
You are prompted to choose a biblio; then it fills all the field's subfields with correct values and adds a $9 with the biblionumber. |
10:37 |
|
hdl |
find it in value_builder/unimarc_field_4XX.pl |
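For illustration, a field built by the plugin might end up looking like this, in the style of the MARC examples pasted earlier. All values here are hypothetical; $9 is Koha's local join key holding the biblionumber:

    461 ## $9 1234 $t Official bulletin on the internet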
10:40 |
|
thd |
hdl: Does it create a new child or sibling record based on the parent record, as I had been discussing with owen a month ago? I pasted that previous text into yesterday's discussion. |
10:43 |
|
hdl |
It creates the field and fills in the subfield. |
10:43 |
|
hdl |
It creates No new MARC::Record. |
10:45 |
|
thd |
hdl: A month ago, Owen had been looking for a simple way to add a serial record for an issue of a serial, where the cataloguer would not need to retype everything, especially as that issue would be discarded after a brief time and the record would then be deleted. |
10:47 |
|
thd |
hdl: So value_builder/unimarc_field_4XX.pl merely creates the linking field with the appropriate information to the linked record. |
10:50 |
|
hdl |
thd : yes. Is this not enough? |
10:50 |
|
hdl |
Is this not what you expected ? |
10:54 |
|
thd |
hdl: There is still the typing of the title, subjects, etc., everything that goes into creating a record, when all you want to do is merely copy a record and link back to it as the parent. Then issue information for the serial and a separate barcode might be added to the child record, and nothing more need be typed. |
10:58 |
|
thd |
hdl: With reference to the issue that kados brought up yesterday, where the library is retaining all the issues and not discarding them: your serials management procedure does not allow for separate barcodes for each issue, does it? |
11:00 |
|
hdl |
Wait Wait Wait. |
11:01 |
|
hdl |
"My" 4XX link management was not meant only for serials. |
11:01 |
|
hdl |
It was for all 4XX management. |
11:01 |
|
thd |
hdl: I realise that. |
11:02 |
|
hdl |
And as such, you cannot expect it to do the job. |
11:02 |
|
thd |
:) |
11:03 |
|
hdl |
I said yesterday that serial issues management should be worked on depending on what the library wants. |
11:04 |
|
hdl |
Whether the serial or the issues are important is a criterion for deciding how to manage issues. |
11:04 |
|
thd |
hdl: So your existing serials management procedure would not provide for separate barcodes for each issue? |
11:05 |
|
hdl |
ATM : it just creates an entry in the serial tables. |
11:05 |
|
hdl |
It is not MARC compliant. |
11:05 |
|
hdl |
And it does not link to an item. |
11:06 |
|
hdl |
Most of the time, in a library, you cannot issue an issue.... |
11:06 |
|
hdl |
funny : issue an issue :)))) |
11:06 |
|
owen |
It depends on the library, obviously: we circulate individual issues all the time. |
11:08 |
|
thd |
hdl: That level of MARC compliance requires careful scrutiny just to apprehend the documentation. |
11:10 |
|
hdl |
owen : I know that some libraries treat an issue as a book and circulate it. |
11:10 |
|
hdl |
But then you have to fill in a biblio for each issue. |
11:11 |
|
hdl |
Indeed, quite often, the persons who contributed to the issue change. |
11:11 |
|
hdl |
Subjects change too. |
11:13 |
|
hdl |
So it would be difficult to make it simple or automatic IMHO |
11:13 |
|
owen |
But with a garden-variety publication like National Geographic, for instance, the information stays the same each month. |
11:13 |
|
hdl |
Sure |
11:14 |
|
hdl |
But one must be aware of the worst case. |
11:14 |
|
hdl |
which comes quite often, when going deeper and deeper. |
11:16 |
|
hdl |
Maybe a quick biblio could be done, which the librarian could get and improve when he has time. |
11:16 |
|
hdl |
Could be a workaround. |
11:17 |
|
hdl |
But maybe not THAT MARC compliant. |
11:17 |
|
owen |
Why not simply offer the option of adding an item record for each newly received issue? |
11:18 |
|
owen |
Of course since Koha can't reserve by item, you still couldn't place reservations on individual issues... |
11:22 |
|
hdl |
The problem is that IMO, it is not just an item record, but a new biblio each time... |
11:24 |
|
hdl |
If you do a search on High-Availability articles you should get all the issues related to this subject, and not only the name of the serial. |
11:25 |
|
thd |
hdl: Putting the true equivalent of your serial holdings table in UNIMARC would entail using UNIMARC's holdings format as a separate record. Although, I imagine that you might spell out the table in 995 $u. |
11:26 |
|
hdl |
Sorry. |
11:28 |
|
thd |
hdl: Could libraries not use UNIMARC 995 $u to designate an issue? |
11:29 |
|
hdl |
Wouldn't it be AWFUL :( |
11:29 |
|
thd |
hdl: a particular issue of a periodical, or a particular bound volume of issues for a period of time in UNIMARC 995 $u. |
11:30 |
|
thd |
hdl: Yes, the UNIMARC holdings format with holdings in a separate record is better for that purpose. |
11:31 |
|
hdl |
When you have been subscribing to a serial for 50 years... Do you think ONE MARC record will contain everything? |
11:32 |
|
thd |
hdl: Well, the holdings format has a way of compressing the information but if there are 50 years worth of issue based barcodes that may become too much for record size limits. |
11:33 |
|
hdl |
;) |
11:33 |
|
hdl |
And It would not be satisfying. |
11:33 |
|
thd |
hdl: kados had an example of a library with over 300 issues of a periodical each with its own barcode. |
11:34 |
|
hdl |
members do not look for a particular issue, but most of the time for a piece of information, for an article... |
11:34 |
|
hdl |
thd : All in ONE MARC record ??? |
11:38 |
|
thd |
hdl: I had thought that the usual case was that a user consulted a serials indexing and abstracting service to find the article citation with volume and issue number and then the catalogue to find the general serial record. |
11:40 |
|
hdl |
thd.. Maybe you are right. Unfortunately, I have never been a "library rat". |
11:40 |
|
thd |
hdl: Of course that process is much faster with OpenURL links to a full text database or the catalogue from within an indexing and abstracting service. |
11:41 |
|
thd |
hdl: I grew up in libraries. Or never grew up in libraries :-) |
11:42 |
|
hdl |
Lucky one :) |
11:45 |
|
thd |
hdl: Most general libraries that I know do not separately barcode each issue for long term retention on an issue basis. Issues are gathered and bound into volumes to reduce the individual items over time. |
11:46 |
|
owen |
I suspect it's going to take a paying customer to further develop serials in any meaningful way. It's too abstract right now. |
11:46 |
|
thd |
hdl: kados has found some other types of serial management practices that approach the MARC record size limits. |
11:51 |
|
hdl |
thd : I have seen this practice of "voluming" issues. |
11:51 |
|
hdl |
Well, folks, sorry... but... It is time. |
11:51 |
|
thd |
hdl, owen: How might the OPAC show the main serial parent record and not show 300 child records as well, if the child records were all separate bibliographic records? The separate bibliographic records would exist so as not to exceed the maximum MARC record size when issues were recorded individually, with their own barcodes, on a permanent basis. |
11:52 |
|
hdl |
It is my sister's birthday. |
11:52 |
|
owen |
Wish her a happy birthday from Ohio for me :) |
11:52 |
|
thd |
happy birthday hdl's sister |
11:52 |
|
hdl |
YESS... |
11:52 |
|
hdl |
She'll be very pleased. |
11:53 |
|
hdl |
But indeed, serials ARE a CRUX we should lean on, to understand precisely what is asked. |
11:53 |
|
hdl |
I'll reread the logs. |
11:54 |
|
owen |
thd: have you had experience with serials in other ILSes? |
11:55 |
|
thd |
owen: I have had little experience with other Ila's outside the OPAC and disembodied cataloguing modules. |
11:56 |
|
owen |
That's part of our problem. Neither have I--and part of the reason for that is that vendors like to put a hefty price tag on serials support |
11:56 |
|
thd |
s/Ila's/ILS's/ |