IRC log for #koha, 2006-04-20

All times shown according to UTC.

Time Nick Message
13:18 thd-away kados: are you around?
13:19 kados thd: yes
14:00 kados owen: noticed your recent commit related to decimal place
14:00 owen Yes?
14:00 kados owen: I'm wondering if we shouldn't have a syspref specifying number of decimal places
14:00 kados and use that instead of a hardcoded value
14:00 kados any thoughts?
14:01 owen I've asked that before, but since no one has ever jumped on it I've been fixing them piecemeal.
14:01 kados heh
14:01 owen I think a syspref is a fine idea.
14:02 kados paul will probably want us to wait until 3.0 for that change
14:03 owen Should that be combined with a local currency preference?
14:03 kados probably yes
14:03 kados I bet there's a free currency package out there
14:03 kados we could integrate at some point
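A minimal sketch of the syspref-driven formatting being discussed, assuming a hypothetical 'CurrencyDecimalPlaces' preference; C4::Context->preference is the existing Koha accessor that appears later in this log, but the preference name and helper are illustrative only.

    # Sketch: format a price using a (hypothetical) system preference
    # instead of a hardcoded number of decimal places.
    use strict;
    use warnings;
    use C4::Context;

    sub format_price {
        my ($amount) = @_;
        # 'CurrencyDecimalPlaces' is a hypothetical syspref name; fall back to 2.
        my $places = C4::Context->preference('CurrencyDecimalPlaces');
        $places = 2 unless defined $places && $places =~ /^\d+$/;
        return sprintf( "%.${places}f", $amount );
    }

    print format_price(12.5), "\n";    # prints "12.50" with the default of 2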
15:32 chris morning
15:40 kados morning chris
15:40 kados chris: we had a nice bugsquash mtg
15:40 chris cool
15:40 kados chris: pierrick's got a whole list of questions for you :-)
15:40 chris righto
19:56 thd kados: are you back later yet?
01:16 mason .
01:55 pierrick chris, are you around?
01:58 pierrick kados?
02:12 paul pierrick: maybe joshua is getting a bit of sleep after all ;-)
02:13 paul pierrick: hello
02:13 pierrick paul, hello
02:13 pierrick paul, yes, but his hours are sometimes surprising
02:13 hdl hello everyone
02:13 paul exactly!
02:13 pierrick hello hdl
02:13 paul hello hdl
02:14 paul long live French-style paperwork!!!
02:14 paul declarations 2042 + 2035A + 2035B + professional training levy + VAT + business tax
02:15 paul all that's left is to write a cheque for €8250 for the 2005 VAT balance...
02:15 paul that was my life ;-)
02:17 paul pierrick: someone modified the old wiki yesterday (www.koha.org/wiki)
02:18 pierrick paul, as far as I know, Joshua hasn't made the new wiki official
02:18 paul yep, but since we're using it...
02:19 paul well, I've just modified the home page of the old one.
02:19 pierrick OK
02:19 pierrick I'm planning to propose a new home page for the new wiki soon. The current page is too cluttered
02:31 russ hi everyone
02:32 paul hello russ
02:32 paul so... everybody coming to Marseille has their own laptop.
02:32 paul almost everybody has a laptop with wifi.
02:33 paul we have a room with 15 chairs, should be OK even if we are 16
02:33 russ cool
02:34 russ chris and i are working together on his presentation at the moment
02:50 russ paul you there?
02:50 paul yep
02:50 russ have you got a speaker sorted for the first day?
02:50 paul (although on phone)
02:50 russ for the first talk
02:50 russ no worries
02:52 pierrick hi russ
02:52 russ hi pierrick
02:52 pierrick russ, who manages bugs.koha.org, I mean software upgrades and so on
02:52 pierrick ?
02:52 russ chris
02:52 pierrick (because I would really like an upgrade to Bugzilla 2.20.x)
02:53 russ (he is sitting right next to me at the moment)
02:53 pierrick bugzilla 2.14.2 is 4 years old, has bugs and is not supported anymore
02:53 russ he'll have a look
02:53 pierrick hi chris :-)
02:53 russ :-)
02:54 pierrick OK, if he wants, I can work on the upgrade, but I need a MySQL dump :-)
02:54 russ he'll do it
02:55 russ the upgrade i mean
02:55 pierrick OK, thank you :-)
02:55 hdl hi russ and chris.
02:55 russ hi hdl
03:13 thd paul hdl: I was confused by what was said yesterday about normal (budget based) acquisitions being used by your libraries for the past two years.
03:14 hdl confused ?
03:14 hdl hi osmoze.
03:15 thd hdl: I am the one confused by paul's statement yesterday :)
03:16 hdl yes thd. I had read it.
03:16 hdl you said confused.
03:16 paul (still on phone)
03:16 hdl Is that because you think it does not work?
03:16 thd hdl: do you have libraries tracking funds and payments to place and receive orders within Koha during the past two years?
03:17 hdl I am only one year old in the Koha project. But yes, I know some.
03:18 hdl I know some limits to acquisition management.
03:18 hdl But some ppl found it useful.
03:18 thd hdl: yes, when I tested for 2.2.3 I found some aspects of budget based acquisitions not working in the default intranet English templates.
03:18 osmoze hello
03:19 hdl pls detail.
03:19 thd hcl: chris had explained to me at the time that normal acquisitions had been broken since 2.X.
03:20 hdl (hcl stands for hydrochloric acid. :) )
03:21 hdl thd: I know they had some template for their clients that would not break things.
03:21 thd hdl: I found that some pages for placing an order had been disconnected from the templates.
03:22 thd hdl: Also, I found that I could not complete receiving an order.
03:24 thd hdl: I found that an order once completed could not be found because the invoice number was never saved in the SQL tables or something like that.
03:25 thd hdl: During receipt of the order the quantities were not deducted from the original order.
03:25 hdl thd: I admit there are some real needs and some hard work awaiting us on that module.
03:26 hdl It is normal for receptions not to be deducted from the order.
03:27 hdl thd: what would not be normal is if earlier receptions of one order were forgotten and only the latest reception kept.
03:28 hdl thd: item receptions must not be deducted but compared to the order IMHO.
03:28 thd hdl: I do not remember all the problems with the acquisitions that I encountered, as I had become satisfied with the answer chris gave about it having been broken for years, and his clients were not yet using 2.X.
03:29 thd hdl: so it seems that you have some scheme for working around some difficulties
03:30 thd hdl: Your libraries are not troubled when a distributor sends only a partial shipment, as is the usual case in my experience, where some titles still remain to be sent?
03:32 thd hdl: how do your libraries track receipt of partial shipments without committing the partial receipt of the order?  I should have said that rather than deducted from the original order.
03:34 hdl thd: you won. ;) (They cannot manage it properly with the current system.) They know. We know. As I said, work, work and work again on the acquisition module.
03:36 thd hdl: yet enough of it does work for your libraries that they use it to provide some service instead of simply using simple acquisitions?
03:37 hdl thd: yes. And normal acquisition is required for serials management.
03:38 thd hdl: and you support and maintain it somewhat in its current partly working state?  I guess that I assumed serials management was an exception that would also work for simple acquisitions without using a budget.
03:40 thd hdl: That clarifies my confusion for acquisitions from yesterday.  I have one more question.
03:40 hdl thd: yes ?
03:42 thd hdl: paul had made reference to work being done to correct the UNIMARC framework.  Work that was 95% done.  Can you clarify for me what work that is?
03:44 hdl ask paul directly. :) (I think it is a work out of clients will and normalization will)
03:46 thd hdl: ok I will ask paul directly, I do not understand your usage of 'will' just now.
03:47 thd paul: I have a question for you about your recent work on the UNIMARC framework.
03:47 thd paul: please let me know when you are off phone.
03:47 hdl thd : will stands for request or need compliance.
03:49 thd hdl: 'request' or 'need' would have been understandable for me in English for that context.
03:50 hdl thd: sorry. English is only my second language. ;)
03:51 thd hdl: I know, and English and French are much too tricky.  I can certainly see how anyone might think that 'will' would apply and yet I was baffled by the usage :)
03:51 hdl And I thought will would denote also kind of commitment conotation.
03:53 thd hdl: your expectation about the usage of 'will' was very reasonable, and yet if native users never imagine the word in that manner despite its possible application, I could not think of what you had meant :)
04:01 thd hdl: language ought to be about conveying meaning where words are accepted for their meaning in whatever context the might possibly be applied.  Yet in practise human capacity for language is too limited so we become confused by anything outside a customary pattern because determining possible meaning takes too long to process for our little brains :)
04:01 thd s/the/they/
04:03 hdl :D
04:46 tumer Hi is paul around?
04:46 paul yes.
04:47 paul hello tumer
04:47 tumer Hi paul I have questions regarding 2.4 and UTF8
04:47 tumer have time?
04:47 paul throw it, although i'm not sure i'll be the best person to answer.
04:48 tumer By the way sorry I missed bug-squash I almost got squashed by a two legged bug
04:48 tumer are you still using char_decode?
04:49 paul (kados could confirm)
04:49 tumer Well I think char_decode has got some wrong coding in 2.2, that's why some German characters do not get converted.
04:50 tumer I have corrected them but don't want to commit it unless someone else tries it as well
04:51 tumer I am the person (MARC21)
04:51 tumer Who else uses above ascii?
04:51 paul I suggest you wait until joshua is back from his bed to decide whether you commit or not.
04:52 paul most libraries I think, although very rarely for english ppl
04:53 tumer This new M::F::XML does not convert some of the Turkish chars from MARC-8 to UTF-8 so it's out for me. I still have to rely on char_decode until there is a fix
04:55 tumer Another problem we have to realise is that the new M:F:X is very sensitive, it tries to be clever.
04:57 tumer When moving from 2.2 to 2.4 or 3.0 we have to make sure that all existing MARC records are UTF-8. Not only the chars but the leader as well, otherwise everything breaks down
04:59 paul tumer: you're right = the updatedatabase tool will have to take care of this.
05:01 thd tumer: why does the leader need to be UTF-8 when it contains only ASCII values by definition?
05:02 thd tumer: the leader would never have multibyte characters.
05:02 tumer The leader position 10 has to say "a" if the MARC record contains any UTF-8, otherwise it breaks. This does not happen with the old MFX because it does not care, it just passes anything it has as it is
05:03 tumer The new MFX tries to convert everything to MARC-8. Have phone. Out!
05:03 thd tumer: what I meant was that the 'a' and every other character in the leader is ASCII
05:04 paul thd: I think tumer means that the leader MUST reflect the fact that the biblio is in utf-8
05:05 thd paul: yes tumer, the leader character encoding setting must reflect the character change, and some fixes that kados applied for that purpose have sometimes broken.
05:07 thd tumer: the real problem is that people have records in their system where the leader specified encoding does not match the actual record content in a different encoding.
05:10 thd tumer: I have communicated to kados about systems which attempt to guess what starting encoding is actually used before conversion, and then test whether the proposition about the possible encoding is true, to overcome cases where the starting encoding is uncertain despite the setting of the record specifying a particular encoding.
05:12 thd tumer: characters past the ASCII range are of importance even in fairly monolingual English records in the US because the standard forms of proper names may use characters past the ASCII range.
05:14 thd tumer: I am in New York City where ASCII only would not be taken seriously by most any library.  The English only mono culture that infects large parts of the US is pleasantly absent in New York.
05:17 thd tumer: Also Spanish language material is becoming increasingly important throughout the US despite the false hostility towards immigrants expressed in the US Congress recently.
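The exchange above hinges on MARC 21 leader byte 09, where 'a' means UCS/Unicode. Below is a minimal sketch of the kind of check thd describes: compare what the leader claims with whether the field data actually decodes as UTF-8. It assumes the record's fields still hold raw, undecoded bytes; the helper name is illustrative.

    # Sketch: does the leader claim UTF-8 (leader/09 eq 'a'), and does the
    # record's field data actually decode as UTF-8?
    use strict;
    use warnings;
    use MARC::Record;
    use Encode ();

    sub leader_vs_content {
        my ($record) = @_;
        my $claims_utf8 = substr( $record->leader(), 9, 1 ) eq 'a';
        my $raw = join '', map { $_->as_string() } $record->fields();
        my $decodes_as_utf8 =
            eval { Encode::decode( 'UTF-8', $raw, Encode::FB_CROAK ); 1 } ? 1 : 0;
        return ( $claims_utf8, $decodes_as_utf8 );
    }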
05:17 paul thd : tumer is disconnected
05:17 thd :)
05:18 thd maybe he will see the logs later
05:22 thd paul: so the question I had for you is what work were you referring to yesterday for recent UNIMARC framework corrections that you had mentioned were 95% done.
05:22 thd ?
07:02 paul hello Sylvinh1 !
07:02 paul l'espion marseillais.
07:03 ToinS salut sylvinho
07:09 thd paul: If you are back again, I will ask again.  What UNIMARC framework work have you done recently?  You had referred to something yesterday that was 95% done.
07:10 Sylvinh1 bijour
07:16 paul i'm back thd
07:17 thd paul: Did you understand my question?
07:17 paul I was a little bit too quick when saying it's 95% done.
07:17 thd paul: so it is less than 95% done?
07:18 paul in fact, I have many frameworks, some that are small & interesting for libraries that don't want too much MARC, some are complete, but a little bit too much for some libraries.
07:18 paul for example, the framework used by IPT is really complete.
07:18 paul while the framework used by EMN is small & efficient, but incomplete.
07:19 paul the one in CVS is a small & efficient one, although 100 & other coded fields are not here.
07:19 paul I don't think I'll change anything in the CVS framework for now.
07:19 thd paul: what is IPT?
07:19 paul Institut Protestant de Théologie (one of my clients)
07:21 tumer paul: As a framework expert can you suggest 2 subfields to use internally in Koha for LC indexing, like 090$c biblionumber?
07:22 thd paul: have you seen the work that I prepared for kados where we had extended the hidden parameter to allow support for very comprehensive frameworks without bringing the record editor to a halt generating an excessively large form?
07:22 paul tumer: no, I think you can use whatever you want (technically). If it's a marc21 question, then thd is a better source
07:23 paul thd: a little bit, although not completly
07:24 thd paul: I sent a copy to hdl.  I had not wanted to commit it until I had finished a few last things and verified every little element again.
07:24 tumer paul: it has to be decided in general, like the biblionumber, so that if a library wants to use LC indexing, the 2 more fields that I'll add to biblioitems will have somewhere to reside
07:24 paul tumer: ???
07:25 thd tumer: i will answer you in one moment.
07:26 thd paul: what recent work is it that is less than 95% complete
07:26 thd ?
07:26 paul the framework in cvs rel_2_2, default, for unimarc is incomplete.
07:26 paul while I have some that are complete, but too much for half of the libraries I bet.
07:29 tumer paul: For LC indexing I have to parse the classification into 2 parts, alphabetic and numeric (the LC way of indexing), then use these 2 fields for LC sorts. If we leave this to each library we may have problems (or do we?)
07:30 thd paul: The design I developed with kados to support complete frameworks allowed preservation of any data that started in the record while providing just a carefully chosen set of subfields to be used for editing if it was not already present in the record.
07:30 thd paul: I also went back to more minimising than what you had seen.
07:33 thd paul: With a little extra work to support adding any subfields as needed at the time of record editing, the default subfields present can be extremely small.
07:34 thd paul: Will you be committing the IPT frameworks or an even more complete version?
07:34 paul thd: it's not planned
07:35 paul (+ I made nothing yet to use your improvements on framework structure)
07:37 thd paul: so you are planning to commit some frameworks with some more fields and subfields than the existing frameworks but less than what IPT has?
07:37 paul no, I plan to do nothing.
07:37 thd paul: :)
07:38 thd paul: ok, I will plan something for your benefit then if you have no plan :)
07:39 paul :)
07:39 thd paul: I want to be certain that UNIMARC keeps up with recent improvements for MARC 21.
07:42 thd tumer: yu are trying to sort LC call numbers by dividing the leading letter class from the numeric and later parts of the classification both of which may start together in 050 $a?
07:42 thd s/yu/you/
07:42 tumer thd:yes I am doing that
07:44 thd tumer: there are some subtler issues about LC classification sorting but let me address the question that you just asked.
07:44 tumer thd: I have the script doing it on my system. Currently I am using 090$a and 090$b to hold these values. But to commit it I need some advice
07:45 thd tumer: you are looking for a good place to store those values which may or may not be 090 $a $b.  Is that your question?
07:46 tumer thd: yes. And do they have to be pre-programmed or left to the library to decide?
07:50 thd tumer: 090 is a poor choice for Koha to use altogether because that is used by many libraries as the place to store LC call numbers in the world's largest library union catalogues.
07:50 tumer thd: but we already have biblionumber in there
07:52 tumer thd:Are we to change biblionumber to somewhere else at 3.0?
07:52 paul tumer: biblionumber can be anywhere.
07:52 paul it's in 090 by default.
07:52 thd tumer: yes, blame NPL for not thinking ahead.  It could easily be changed because it is not hard coded so do not hard code 090.  However I have not selected a better place but merely recommended converting standard 090 usage to 09o with the letter 'o' as a temporary measure.
07:53 tumer paul: it is hard coded I thought!
07:54 thd tumer: it is only set by the setting of the bibliographic framework which I had just been discussing with paul
07:54 tumer OK sorry not hard coded.:(
07:55 tumer thd: all I am asking is that we put it in a place in the framework as a default and let the user change it if they know what they are doing
07:55 thd tumer I have created a comprehensive bibliographic framework for MARC 21 which is not yet in CVS.  I could email it to you before I am liable to commit it.
07:55 thd tumer: yes I am looking now.
07:56 tumer thd:thanks
07:57 thd tumer my default bibliographic framework currently has the following default values for 090.
07:57 thd -- Original Record ID Field/Subfields
07:57 thd -- INSERT INTO `marc_tag_structure` VALUES ('090', 'KOHA DATA', 'KOHA DATA', 1, 0, '', '');
07:57 thd -- INSERT INTO marc_subfield_structure VALUES ('090', 'a', 'Koha Itemtype (NR)', 'Koha Itemtype (NR)', 0, 0, NULL, -1, NULL, NULL, '', NULL, '', NULL, NULL);
07:58 thd -- INSERT INTO marc_subfield_structure VALUES ('090', 'b', 'Koha Dewey Subclass (NR)', 'Koha Dewey Subclass (NR)', 0, 0, NULL, -1, NULL, NULL, '', NULL, '', NULL, NULL);
07:58 thd -- INSERT INTO marc_subfield_structure VALUES ('090', 'c', 'Koha biblionumber (NR)', 'Koha biblionumber (NR)', 0, 0, 'biblio.biblionumber', -1, NULL, NULL, '', NULL, '', NULL, NULL);
07:58 thd -- INSERT INTO marc_subfield_structure VALUES ('090', 'd', 'Koha biblioitemnumber (NR)', 'Koha biblioitemnumber (NR)', 0, 0, 'biblioitems.biblioitemnumber', -1, NULL, NULL, '', NULL, '', NULL, NULL);
07:58 thd -- Current Record ID Field/Subfields
07:58 thd INSERT INTO `marc_tag_structure` VALUES ('090', 'SYSTEM CONTROL NUMBERS (KOHA)', 'SYSTEM CONTROL NUMBERS (KOHA)', 1, 0, '', '');
07:58 thd INSERT INTO `marc_subfield_structure` VALUES ('090', 'a', 'Item type [OBSOLETE]', 'Item type [OBSOLETE]', 0, 0, NULL, -1, NULL, NULL, '', NULL, -5, '', '', '');
07:58 thd INSERT INTO `marc_subfield_structure` VALUES ('090', 'b', 'Koha Dewey Subclass [OBSOLETE]', 'Koha Dewey Subclass [OBSOLETE]', 0, 0, NULL, 0, NULL, NULL, '', NULL, -5, '', '', '');
07:58 thd INSERT INTO `marc_subfield_structure` VALUES ('090', 'c', 'Koha biblionumber', 'Koha biblionumber', 0, 0, 'biblio.biblionumber', -1, NULL, NULL, '', NULL, -5, '', '', '');
07:58 thd INSERT INTO `marc_subfield_structure` VALUES ('090', 'd', 'Koha biblioitemnumber', 'Koha biblioitemnumber', 0, 0, 'biblioitems.biblioitemnumber', -1, NULL, NULL, '', NULL, -5, '', '', '');
07:59 thd tumer: sorry, I guess that was a little too much for IRC; it does not look bad in VIM with a non-text-wrapping view.
08:00 tumer thd: well it seems you have used a and b and left c & d intact
08:01 thd tumer: well that seems to show that $a and $b are obsolete.  I believe that they had once been defined for NPL and then that was changed.  I think what you want though is what I put in 942.
08:02 tumer thd:OK
08:02 thd tumer yes this is good ...
08:02 tumer I keep losing connection
08:03 thd tumer why is that?
08:03 thd what is the cause of your connection loss?
08:03 tumer thd:new to IRC I think
08:03 thd here comes some SQL with comments ...
08:04 kados hi tumer
08:04 tumer hi kados
08:04 kados hi thd
08:04 thd -- Current primary biblioitems Field/Subfields
08:04 thd INSERT INTO `marc_tag_structure` VALUES ('942', 'ADDED ENTRY ELEMENTS (KOHA)', 'ADDED ENTRY ELEMENTS (KOHA)', 0, 0, '', '');
08:04 thd INSERT INTO `marc_subfield_structure` VALUES ('942', 'a', 'Institution code [OBSOLETE]', 'Institution code [OBSOLETE]', 0, 0, '', 9, '', '', '', NULL, -5, '', '', '');
08:04 thd INSERT INTO `marc_subfield_structure` VALUES ('942', 'c', 'Item type', 'Item type', 0, 1, 'biblioitems.itemtype', 9, 'itemtypes', '', '', NULL, 0, '', '', '');
08:04 thd INSERT INTO `marc_subfield_structure` VALUES ('942', 'j', 'Location (call number prefix code)', 'Location (call number prefix code)', 0, 0, 'biblioitems.classification', 9, '', '', '', NULL, 0, '', '', '');
08:04 thd INSERT INTO `marc_subfield_structure` VALUES ('942', 'k', 'Classification base (DDC to decimal or LCC letter class padded after single letter classes with trailing 0', 'Classification base', 0, 0, 'biblioitems.dewey', 9, '', '', '', NULL, 0, '', '', '');
08:04 thd INSERT INTO `marc_subfield_structure` VALUES ('942', 'l', 'Classification subclass (DDC after decimal or LCC number after letters', 'Classification subclass', 0, 0, 'biblioitems.subclass', 9, '', '', '', NULL, 0, '', '', '');
08:04 pierrick hi kados
08:04 thd hello kados
08:04 kados morning pierrick
08:05 pierrick chris upgraded Bugzilla to 2.20.1 :-)
08:05 kados w00t!
08:06 tumer thd: yes thats what I wanted. Waiting for your e-mail tgarip@neu.edu.tr
08:06 paul (hello kados)
08:06 thd tumer: I have suggested 942 $j and $k with suggestions about usage for your purpose.
08:07 kados hi paul
08:07 tumer thd: what other subtler issues with LC?
08:11 tumer kados: do you know that this new M:F:X does not convert all the letters to UTF-8. at least 2 turkish chars.
08:11 thd tumer: one thing is that the classification number can have elements past the decimal point after the letter class which can cause problems with sorting.  Even letters are sometimes present in the classification part before the cutter.
08:11 kados tumer: are those MARC-8 encoded turkish chars?
08:12 tumer kados:yes
08:12 thd tumer: Also the classification hierarchy is not strictly numeric after the letter class.
08:12 kados tumer: the mapping is provided by LOC
08:12 kados tumer: we must investigate whether they can update it
08:12 tumer thd:no problem if you pad with 0's and sort textually
08:13 kados tumer: i also found some native alaskan chars it doesn't handle
08:14 tumer kados:LOC web site has the chars defined. Like the one I just used
08:14 thd tumer: yes much padding required for the best proximate sort.
08:14 kados tumer: ok, so we need to tell the maintainers to update M::F::X
08:14 kados tumer: I will do this
08:14 tumer thd:If we do too much padding zebra slows down on updates
08:15 thd tumer: the fine detail of what has precedence when it matters can only be seen by a detailed examination of each of the various classification schedules.
08:16 tumer thd: I'll commit something and see what you think
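A minimal sketch of the split tumer describes: divide an LC call number into its letter class and numeric class, then pad so a plain textual sort approximates LC order. The regex and padding widths here are illustrative and are not what tumer committed.

    # Sketch: build a textual sort key from an LC call number by splitting the
    # letter class from the numeric class and padding the integer part.
    use strict;
    use warnings;

    sub lc_sort_key {
        my ($callnum) = @_;
        my ( $letters, $number, $rest ) =
            $callnum =~ /^\s*([A-Z]{1,3})\s*(\d+(?:\.\d+)?)?\s*(.*)$/i
            or return uc $callnum;           # not LC-shaped; sort on the raw string
        $number = '0' unless defined $number && length $number;
        my ( $int, $frac ) = split /\./, $number, 2;
        $frac = '' unless defined $frac;
        # Zero-pad the integer part; left-justify the decimal digits so that
        # .7 sorts before .73 and after .65, as a decimal should.
        return sprintf( '%-3s %05d.%-6s %s', uc $letters, $int, $frac, uc $rest );
    }

    print lc_sort_key('QA76.73 .P22 2005'), "\n";   # prints the padded sort key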
08:17 kados tumer: a comment about your recent commit to HEAD
08:18 kados tumer: unfortunately, MARC::Record does not change the actual encoding of the record when you specify ->encoding()
08:18 tumer which one? I had a big car accident and am in a bit of shock :(
08:18 kados oh no!
08:18 kados are you ok?
08:18 thd tumer: my recommendation for all such performance issues is that you create yet another subfield to store a numeric index number so that the calculation has already been done.
08:18 tumer Some 2-legged bug tried to squash me. Turned over with the car. I'm OK
08:19 kados yikes
08:19 kados tumer: the commits I'm speaking of:
08:19 kados +               $record->encoding('UTF-8');
08:19 kados and in Biblio.pm:
08:19 kados +       #Change MARC Leader to UTF-8 incase user did not set it.New M::F::XML is
08:19 kados +sensitive to this
08:19 kados +       $record->encoding('UTF-8');
08:20 kados I agree 100% if you suggest that MARC::Record _should_ be converting the charset
08:20 thd tumer: The index number number or rather sort number need not be numeric but simply has incorporated all the padding calculations into it.
08:20 kados tumer: but in fact, all that does is change leader position 9
08:20 tumer kados: this one makes sure that the leader of marc record is changed to saying that it is UTF-8. Necessary when moving from 2_2 to 3
08:21 kados tumer: however, a better way is to do this:
08:21 tumer kados:The new M:F:X requires this or it assumes MARC-8 even if the record we created is UTF-8
08:21 thd tumer: Do the CPU intensive work in a batch process whenever and store a value so that there is much less work to do at query time.
08:21 kados tumer: my $xml = MARChtml2xml(\@tags,\@subfields,\@values,\@indicator,\@ind_tag);
08:21 kados tumer: my $record=MARC::Record->new_from_xml($xml,C4::Context->preference('TemplateEncoding'),C4::Context->preference('marcflavour'));
08:22 tumer thd: that's why I'll add 2 more fields to biblioitems to hold these values
08:22 kados tumer: this will change the leader position _AND_ encode the record correctly
08:23 kados tumer: sorry, my paste above is incorrect
08:23 kados my $xml = $record->as_xml;
08:24 kados my $newrecord = MARC::Record->new_from_xml($xml, C4::Context->preference('TemplateEncoding'),C4::Context->preference('marcflavour'));
08:24 tumer kados: I've seen that
08:24 thd tumer: You need two subfields for the sort, but maybe forming a single value in advance from those two in yet another subfield to use for sorting at query time will be much faster for queries.
08:24 kados in that case, $newrecord has proper leader AND encoding
08:24 kados in the other case (where just ->encoding() is used), the leader is lying about the encoding which can be very bad
08:25 tumer thd: I tried that but a 25-digit-long letter or number slows zebra sorting veeeery much
08:26 thd tumer: how do two values to sort from at query time have an advantage over that?
08:27 tumer kados: I'll look into this
08:27 kados tumer: you will need to create a new systempreference
08:27 kados tumer: if you haven't done so already
08:27 kados tumer: called 'TemplateEncoding'
08:27 tumer thd:I presort on 2 fields and call 2 fields sorted
08:27 kados tumer: which can store 'UTF-8'
08:27 kados tumer: it is also used in the templates
08:27 kados tumer: (at least in rel_2_2 where I have been experimenting with this idea)
08:28 tumer kados: and only 'UTF-8'
08:29 kados tumer: my idea is to make sure that Koha converts from 'MARC8' to UTF8 _before_ a record enters the collection
08:29 kados tumer: so internally, everything is utf-8
08:29 thd tumer: i guess I miss something about what it means to presort two fields over presorting one field created from the two fields
08:29 kados tumer: but if we are missing mappings in MARC::Charset, we will have to update it
08:30 kados tumer: (if you can figure out how MARC::Charset works perhaps you could add them? :-))
08:30 kados tumer: (I've not had a close look yet, but hope to later this week)
08:32 tumer kados: exactly right. But pre-3.0 data is iso8859. Updating the database we change everything to utf8. Call MARCgetbiblio and you have a record with utf8 data and a wrong leader. I was merely trying to correct that problem. Before we do anything else we have to run ->encoding('UTF-8') on all records and update and create zebra etc.
08:33 tumer kados: I already looked at MARC::Charset, it is a 350M big file and scary :)
08:34 kados tumer: (yes it is scary)
08:34 kados tumer: all you are doing with ->encoding('UTF-8') is updating the leader
08:34 kados tumer: you are not converting the encoding
08:34 kados tumer: a better way is to actually re-encode the records as I posted above
08:35 kados tumer: the above code fixes both the leader and the encoding
08:35 kados tumer: (though I agree that my expectation of ->encoding() would be to also change the records, but in fact, all it does is change the leader)
08:35 tumer kados: case dismissed:)
08:36 kados tumer: we could fix MARC::Record so that ->encoding() also fixes encoding
08:36 kados tumer: that would be ideal :-)
08:37 tumer kados: we have bigger issues I believe:)
08:37 kados tumer: which?
08:38 kados tumer: ahh, yes, we can get by without modifying the behaviour of ->encoding() by using as_xml() and new_from_xml(UTF-8)
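For reference, a self-contained sketch of the round trip kados describes: serialize the record to MARCXML and read it back so that both the content encoding and leader/09 come out consistent. The 'TemplateEncoding' and 'marcflavour' preferences are the ones mentioned above; the helper name is illustrative.

    # Sketch of the as_xml()/new_from_xml() round trip: unlike ->encoding(),
    # which only rewrites the leader byte, this re-encodes the record and
    # sets the leader consistently.
    use strict;
    use warnings;
    use MARC::Record;
    use MARC::File::XML;
    use C4::Context;

    sub reencode_record {
        my ($record) = @_;
        my $xml = $record->as_xml();
        return MARC::Record->new_from_xml(
            $xml,
            C4::Context->preference('TemplateEncoding'),    # e.g. 'UTF-8'
            C4::Context->preference('marcflavour'),         # e.g. 'MARC21'
        );
    }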
08:38 tumer kados: I am playing a lot with zebra. Performance issues etc.
08:38 kados tumer: really!
08:38 kados tumer: could you post your discovery to koha-zebra?
08:39 tumer kados: creating more than 4 sorts slowed down update times and created problems for me.
08:39 kados wow
08:39 kados tumer: you're running on windows, right?
08:39 kados i wonder if that's the problem
08:39 tumer kados:yes
08:40 tumer kados: it could be!
08:40 kados so unstable a platform
08:40 tumer but unfortunately so popular
08:41 kados IMO you could very quickly learn some linux skills and convert to linux for your Koha servers
08:41 kados it would save money in licenses and would be more stable
08:41 slef even quicker if we ever get koha.deb
08:41 kados slef++
08:41 tumer I have to stay on Windows because the university platform is Windows
08:42 slef I've got a new server which should help test that... just need to bring it online :-/
08:42 kados nice
08:42 slef tumer: ah, the "you may only use a hammer, no matter what tool is needed" approach
08:49 owen I saw my doctor using a Windows tablet PC running an IE-based web app. I wondered what it was and whether it was cross-browser compatible
09:00 thd tumer: I sent you the message for the MARC 21 bibliographic framework
09:01 thd kados: I have had a syntax error from your eval usage in afognak2koha.pl
09:02 paul what is afognak2koha.pl ?
09:03 kados paul: just a MARC::Record script that adds holdings data to a batch of MARC records
09:03 thd paul: that is a migration script that I have been modifying for one of Liblime's customers
09:06 thd kados: I have not really looked properly at the issue with the eval usage.  I can just borrow some code from your modification of bulkmarcimport.pl, which does at least work, although the structure is different because only MARC data is being added in that case.
09:17 tumer kados: are you here
09:18 kados tumer: yes ... see the private message i sent you?
09:18 kados or am I sending it to the wrong person? :-)
09:21 pierrick can someone explain to me the database model on acquisitions? I don't understand the necessity of aqbasket :-/
09:22 paul pierrick: why ?
09:22 paul 1 aqbasket can contain X aqorders, and some info in aqbasket is only there.
09:22 paul so the table can't be dropped I think
09:23 pierrick an aqorder can contain only one biblio?
09:24 paul 1 order is for 1 biblio, right.
09:24 paul it's the "order line"
09:24 pierrick so one basket has several order lines
09:24 paul yep
09:25 pierrick why on earth isn't aqorder called aqbasketline or aqbasket_item? ;-)
09:25 paul ask katipans ;-)
09:26 paul I must add that in koha 1.x, aqbasket did not exist & all basket-related lines were duplicated in aqorder
09:26 paul I've added aqbasket in 2.2, to add some DB consistency
09:27 pierrick OK, I understand, thank you paul
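A small sketch of the relationship paul describes: one aqbasket row carrying the header info, many aqorders rows (order lines), one biblio per line, joined on basketno. The aqorders columns other than basketno and biblionumber are illustrative; the real 2.2 schema may differ.

    # Sketch: list the order lines (aqorders) belonging to one basket (aqbasket).
    use strict;
    use warnings;
    use C4::Context;

    sub basket_order_lines {
        my ($basketno) = @_;
        my $dbh = C4::Context->dbh;
        my $sth = $dbh->prepare(q{
            SELECT aqorders.ordernumber, aqorders.biblionumber, aqorders.quantity
            FROM   aqbasket
            JOIN   aqorders ON aqorders.basketno = aqbasket.basketno
            WHERE  aqbasket.basketno = ?
        });
        $sth->execute($basketno);
        return @{ $sth->fetchall_arrayref( {} ) };    # one hashref per order line
    }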
11:34 owen Did Bugzilla get upgraded?
11:34 kados yep, chris did it last night
11:35 owen It's got no style :(
11:35 kados heh
11:52 owen kados: would you say this bug has been fixed by your recent changes? http://bugs.koha.org/cgi-bin/b[…]w_bug.cgi?id=1030
