Time  Nick         Message
11:58 thd          slef: I believe that a significant share of people who actually use the public libraries have a computer system that is a few years old and often may not have fonts installed for UTF-8.
11:54 thd          slef: so really the main group of users with a problem is not Debian, Red Hat, etc. users with the wrong locale, but legacy MS Windows and Mac users who do not have up-to-date software, unless there are also problems with OS X.
11:51 slef         thd: www.ttllp.co.uk http://mjr.towers.org.uk/ http://owu.towers.org.uk/ http://www.gnustep.org/ probably some others
11:50 slef         I'll add a note of this discussion to the encodings page RSN
11:50 thd          slef: which are those?
11:50 slef         right, speaking of which, I guess I'd better get on with osCommerce updates
11:49 slef         you can often spot a quiet time because I start fixing my unpaid web sites ;-)
11:49 slef         thd: and no big security updates on debian or osCommerce
11:48 slef         thd: when I've not many contracts ;-)
11:48 thd          slef: when is quiet for you?
11:48 slef         thd: international fonts are a common thing for English-language developers to not get right first time, sadly. (GoboLinux's main developers are in Brazil IIRC)
11:48 slef         thd: I've had it working on Debian in the past, but Debian now has defoma and I've not checked how that works for this.  If someone reminds me at a quiet time, I'll build a test machine for it here.
11:46 thd          slef: maybe GoboLinux has special magic absent from Debian
11:46 slef         If the display is not correct on a similar system, then probably the fonts are misconfigured either in Firefox or fontconfig.
11:45 slef         thd: GNU/Linux (GoboLinux 012+Compiles)
11:45 thd          slef: what is your OS?
11:44 slef         thd: as moo then a c-circumflex.
11:42 thd          slef: but how did it display as you typed?
11:41 thd          yes: the main keys become dead and might be a little different, but it is much faster once you get used to not tripping over the dead keys
11:41 slef         So, it looks to me like utf8 web form gets sent utf8 input by firefox, even if the system locale is fubar.
11:40 slef         I think C4 89 is the correct utf-8 encoding of c-circumflex.
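slef's byte arithmetic checks out. A minimal Python sketch (Python used purely for illustration; the test above was done with Firefox and a Perl CGI script):

```python
# c-circumflex (U+0109) is the two bytes C4 89 in UTF-8, so the urlencoded
# form body for "moo" + c-circumflex is exactly "test=moo%C4%89":
# 14 bytes, matching the CONTENT_LENGTH = 14 seen in the test output above.
from urllib.parse import quote, unquote

s = "moo\u0109"
assert s.encode("utf-8").hex(" ") == "6d 6f 6f c4 89"
body = "test=" + quote(s)
assert body == "test=moo%C4%89"
assert len(body) == 14
assert unquote("moo%C4%89") == s
```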
11:40 slef         thd: so it remaps the dead keys onto the main ones?  I think I have dead keys on AltGr+stuff near the enter key
11:39 slef         that last line should be test=moo[PERCENT]C4[PERCENT]89
11:39 slef         argh, the IRC client bites back
11:39 slef         test=moo04%89
11:39 slef         POST contents, if any:
11:39 slef         CONTENT_LENGTH = 14
11:39 slef         CONTENT_TYPE = application/x-www-form-urlencoded
11:38 thd          slef: the only disadvantage with US-international is that you have to type a space after some common keys, like double and single quotes, or hold down the Alt key for an xterm
11:38 slef         http://localhost/cgi-bin/test-cgi includes (amongst other lines):
11:38 slef         I submitted the form
11:38 slef         (c-circumflex doesn't exist in ISO-8859-1 IIRC)
11:38 slef         I typed moo then a c-circumflex into the text field
11:37 slef         I opened the http://localhost/envtest.html (the form) with Firefox
11:37 slef         which I added two lines to make it print the POST message body
11:37 slef         action is the Apache test-cgi script
11:37 slef         I put up a UTF-8 html page with a form method="POST" on it
11:36 slef         (erm, not utf8 fonts... iso-10646-1 fonts... my mistake)
11:36 slef         Firefox has been configured to use Unicode fonts for Unicode
11:36 slef         utf8 typing is available
11:35 slef         utf8 fonts are available
11:35 slef         X locale is wrong (ISO-8859-1)
11:35 slef         ok, here's the test I just did:
11:32 thd          slef: you may not find one
11:31 thd          us_int or something like that
11:31 slef         thd: what's its name?
11:31 thd          slef: I use the US-international keymap which is much easier than compose
11:30 slef         see what it does on a web form
11:30 slef         let me run a test before I fix my configuration
11:30 slef         My X locale is fubar, but Firefox displays utf8 input
11:30 slef         X's locale is wrong, so any X fonts are a bit off... things like Firefox are fine, though
11:29 thd          slef: the fonts only know what to display because of the application, and Firefox does not inform them well; when you are typing, it reverts to locale settings for the display of what is typed
11:29 slef         I just realised why some of my apps are displaying OK and some aren't
11:29 slef         hahahahah
11:28 slef         é
11:28 slef         thd: so to type e-acute, it would be leftShift+AltGr, then ', then e
11:27 slef         thd: often it's on left Shift+AltGr
11:27 slef         thd: I think it might be called Multi_Key properly
11:27 slef         thd: Compose is an XKeySym
11:27 slef         thd: why, if utf8 fonts are available and the application is displaying utf8?
11:27 thd          slef: is compose an application?
11:26 thd          slef: it is, but even if you had a keymap outputting the correct characters as you typed them, it would look wrong on screen if your locale did not match
11:26 slef         thd: so, users would need utf8 fonts and a keymap that can type the characters (most can with Compose AFAIK) and then firefox can display/send it.
11:25 slef         thd: I thought locale was independent of xkb.
11:24 thd          slef: this only seems to work well for MS Windows, maybe OS X, and free OS users (having changed their locale in advance for the Free OS)
11:22 thd          slef: for this to work well applications need to be able to switch locales for their users
11:20 thd          slef: I have found no similar solutions for the free OS users except changing the locale
11:19 thd          slef: there are solutions for MS windows to create UTF text documents as you type
11:18 thd          slef: except how the key maps function and display typed characters depends on the locale setting
11:17 slef         thd: surely typing UTF-8 is just a matter of typing characters using whatever keymap one has?
11:16 slef         thd: ugh.  Got test results?  This sounds worth linking in to the encodings page, as it only mentions browser problems on output AFAICS
11:15 thd          slef: the user sees UTF-8 from Koha in the web browser but may not be able to type UTF-8 easily from the keyboard
11:14 thd          slef: not if the browser has a non-UTF-8 locale on a free OS
11:05 slef         so koha displays utf8 => browser sends utf8
11:04 slef         doesn't the browser send the content as whatever charset it thought the form page was?
11:03 slef         hang on a mo
11:00 thd          hdl: do Normes de Catalogage AFNOR for French cataloguing never encode names using the original language scripts from which the names originated?
10:58 hdl          Since kados had seen it and wrote about this months ago on encodingscratchpad, which I had looked at only at its creation, maybe he has it.
10:55 thd          hdl: but that does not get the module itself
10:54 thd          hdl: http://216.239.51.104/search?q=cache:PuCg8OdrP38J:dysphoria.net/2006/02/05/utf-8-a-go-go/+utf8cgi&hl=en&gl=us&ct=clnk&cd=1
10:54 thd          hdl: I like the UTF8CGI solution if it works.
10:53 hdl          No. And I cannot find it.
10:53 hdl          No. And I find no trace of it.
10:50 thd          hdl: do you have the module?  The author's UTF-8 A-Go Go is down
10:39 hdl          No.
10:39 thd          hdl: is that in CPAN?
10:34 hdl          thd : We could use UTF8CGI API that certifies UTF8 data from outside are marked as UTF8 ;)
10:32 thd          hdl: we could make a guess about CGI submitted encodings from the bytes passed and the web browser ID.
10:29 thd          hdl: we could have all the clients using an unfree operating system and running an unfree web browser
10:28 thd          hdl: we could ask the user to perform an encoding calibration test by typing some specified characters with each connection but that would be tedious for the user
10:26 thd          hdl: Windows uses UTF-16 internally for multibyte locales
10:25 toins        hdl: instead of changing every #!/usr/bin/perl to #!/usr/bin/perl -COE, you could use the PERL5OPT environment variable
10:25 thd          hdl: tumer has no problem with this because Internet Explorer will transmit UTF-8 encoded data to a page expecting it even if the locale is not UTF-8 on the user's machine, and it can never be under MS Windows to my knowledge.
10:22 thd          hdl: we need to force the browser to send UTF-8 to CGI or interpret what is sent and convert
10:21 thd          hdl: no scratch that maybe
10:19 thd          hdl: so you need three scripts to merge from
10:18 hdl          CGI is not utf-8 aware, so it does not mark utf-8 data as utf-8 to PERL. Then PERL re-encodes the utf-8 data to utf8² (double-encoded)
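hdl's point, restated: the CGI layer hands the script raw percent-encoded bytes, and unless something explicitly decodes them as UTF-8, they get treated as Latin-1. A hedged Python sketch of the same situation (the discussion itself concerns Perl's CGI.pm):

```python
# The same POST body decoded with and without UTF-8 awareness.
from urllib.parse import parse_qs

body = "test=moo%C4%89"                       # slef's test POST body
aware = parse_qs(body)["test"][0]             # percent-bytes decoded as UTF-8
unaware = parse_qs(body, encoding="latin-1")["test"][0]

assert aware == "moo\u0109"                   # moo + c-circumflex: correct
assert unaware == "moo\xc4\x89"               # two spurious Latin-1 characters
```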
10:17 thd          hdl: why had you "realized that I could not tell PERL to use UTF-8 Input since CGI is not UTF-8 aware", if "It gently display anything you pass him"?
10:14 thd          owen: I dislike any features which require using the pointer instead of the keyboard
10:14 hdl          It gently display anything you pass him.
10:13 thd          hdl: so how does CGI ever display UTF-8 outside of Latin-1?
10:13 hdl          But we then have to change any #!/usr/bin/perl to #!/usr/bin/perl -COE
10:12 hdl          And then PERL would have double encoded CGI Input.
10:11 hdl          thd: It changes Koha behaviour insofar as all variables will be converted to UTF-8. I had already realized that I could not tell PERL to use UTF-8 input, since CGI is not UTF-8 aware.
10:10 thd          hdl: if it was easy it would not be as much fun
10:10 thd          owen: I only like drop downs that stay down without the pointer until a selection is made
10:09 hdl          Just kidding.
10:09 hdl          maybe sweating or swearing would have been better ?
10:09 hdl          )
10:09 hdl          (Yes when you have goose flesh :))
10:08 thd          hdl: chilling?
10:08 thd          owen: chilling?
10:07 hdl          thd : I want to think about Chinese. But I have only 24 hours a day, and testing takes time. Moreover, I keep explaining the same thing three times, since people seem to be chilling as soon as we raise some true problems. ;)
10:07 thd          hdl: how does case 2 change all Koha behaviour?
10:07 owen         I'm not crazy about drop-down menus whether they're CSS-based or JS-based.
10:07 owen         thd: I believe slef said the default templates needed to be brought "up-to-date" because he's opposed to javascript-driven menus
10:06 thd          owen: I was merely correcting slef about which templates were older in this case
10:05 thd          owen: yes, I do not like JavaScript generally but the submenus are actually newer not that there was anything wrong without them
10:05 hdl          2) It does not change ALL Koha behaviour.
10:04 thd          hdl: think of the poor Chinese users.
10:04 owen         Why do you think the NPL templates are outdated in the menu-switching respect? Because they lack the drop-down menus?
10:03 thd          owen: I mean the drop down submenus in default.  Actually, I do not know what created them but I presumed JavaScript.
10:03 hdl          1) it needs few changes to code.
10:03 hdl          thd : I was proposing this because :
10:02 hdl          thd: About Case 2: CGI can be a problem if the user inputs data with a non-utf-8 locale and if UTF-8 pages are "posted" with the user's locale.
10:02 thd          hdl: you were just now proposing to use case one which seems dangerous unless you know that you are only dealing with French and ASCII?
10:01 owen         slef: what do you mean about :hover styles?
10:01 owen         thd: what do you mean about menu switching?
10:00 hdl          thd: seems yes.
10:00 thd          ?
10:00 thd          hdl: so there are only two cases currently
09:59 slef         thd: oh. I was hoping that NPL used CSS :hover styles.
09:59 hdl          yes.
09:59 thd          hdl: are you still there?
09:58 thd          slef: the NPL templates are outdated in the menu-switching respect. Or rather, the JavaScript for that in default is newer than the previous design used by both.
09:56 slef         http://wiki.koha.org/doku.php?id=encodingscratchpad
09:55 slef         heh... time to bring default up-to-date
09:55 owen         Not in the NPL templates
09:54 slef         owen: for a possible example of needless javascript: are the intranet-main menus switched using javascript instead of css?
09:54 thd          hdl: Is case 3 Perl 6 fixes everything?
09:52 thd          slef: I think there is; try searching for "encoding" in the wiki search box
09:52 slef         yep
09:51 thd          slef: do you mean in the Koha wiki?
09:51 slef         is there an encodings wiki page?
09:50 thd          hdl: I presume in case 2 that CGI will be no problem if Perl has not lost the encoding of the source data along the way.
09:48 thd          yes
09:48 hdl          thd : Have you understood ?
09:47 hdl          Or 2) Make PERL utf8 aware AND try to get DBI UTF-8 aware for display, and cope with CGI entries as such, hoping they will always be utf8.
09:45 hdl          1) keep PERL not utf-8 aware and RE-encode data from xml records to utf8, hoping there will be no data loss.
09:44 hdl          So we have those solutions :
09:44 hdl          OK. If PERL is utf-8 aware, since DBI and CGI are not, data RISKS being double-encoded.
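What "double encoding" (hdl's utf8²) looks like at byte level, sketched in Python: correct UTF-8 bytes get mistaken for Latin-1 text and encoded to UTF-8 a second time.

```python
s = "Benoît"
once = s.encode("utf-8")                        # correct UTF-8: ...0xC3 0xAE...
twice = once.decode("latin-1").encode("utf-8")  # the "unaware" layer re-encodes
assert twice.decode("utf-8") == "BenoÃ®t"       # classic double-encoding mojibake
# Reversible, but only if you know it happened:
assert twice.decode("utf-8").encode("latin-1").decode("utf-8") == s
```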
09:43 thd          yes
09:43 hdl          Do you understand the point ?
09:42 hdl          So unless you make PERL utf-8 aware, you cannot treat xml records truly as utf-8.
09:41 hdl          yes.
09:41 thd          hdl: yes that Perl treats everything as Latin -1 unless told otherwise?
09:40 hdl          Do you understand the first fact ?
09:40 hdl          let me explain to the end and read.
09:39 thd          hdl: including Klingon?
09:39 hdl          thd: i explain.
09:39 hdl          The fact is that getting a zebra record as xml, if you do not turn PERL utf-8 aware, magically provides you with latin1 data.
09:39 thd          hdl: do you not want Koha to work for every language?
09:38 hdl          I have no slightest idea.
09:38 hdl          I mean. I am trying to get zebra working.
09:38 thd          hdl: what will the Chinese Koha users do?
09:37 hdl          thd: NO For JEE's sake.
09:37 hdl          In MARCdetail.pl, line 290, adding: use Encode; Encode::from_to($value, "latin1", "utf8");
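The effect of that workaround, sketched in Python under the assumption that the XML layer handed Perl Latin-1 bytes: converting Latin-1 to UTF-8 restores the intended encoding, but only while every character fits in Latin-1, which is exactly thd's Chinese objection.

```python
# hdl's Encode::from_to($value, "latin1", "utf8"), as a byte transformation.
value = "Benoît".encode("latin-1")               # what Perl "magically" held
value = value.decode("latin-1").encode("utf-8")  # from_to latin1 -> utf8
assert value.decode("utf-8") == "Benoît"

# The same trick cannot work for Chinese: the text never fits in Latin-1.
try:
    "汉语".encode("latin-1")
except UnicodeEncodeError:
    pass  # expected: no Latin-1 representation exists
```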
09:35 thd          hdl: what if you were storing Chinese in your MARC record?
09:35 thd          hdl: you mean because your MARC record data started as Latin-1?
09:34 hdl          thd: the manipulation was on marcrecord data not on Mysql data.
09:33 thd          hdl: Although, If it requires conversion to Latin 1 it would not work for Chinese in MySQL.
09:33 hdl          (for display)
09:33 hdl          thd: it worked well.
09:32 hdl          thd: Thinking it over, it would probably be the most HARMLESS solution.
09:31 hdl          But I consider it inelegant, since it supposes manipulating utf8 data magically converted to latin1 by PERL and then converted back to utf8.
09:31 thd          hdl: what happened when you tried?
09:30 thd          ?
09:30 thd          hdl: why can you not capture the data in separate scripts and merge to one standard method after Perl knows the encoding of the source data.
09:29 thd          hdl: the problem you report is that setting binmode for the whole script fixes encoding for one data source but breaks it for another
09:28 hdl          That is also a solution I tried.
09:28 hdl          And we HAVE to manipulate PERL data through the XMLrecord for displaying marc records.
09:28 thd          hdl: actually only two scripts should be needed
09:27 thd          hdl: why not use two separate scripts for capturing the data and then merge with a third script
09:26 hdl          If PERL is not UTF-8 aware and manages UTF-8 data, display will be broken.
09:25 hdl          But, as soon as you manipulate PERL data and display those data.
09:25 hdl          Since there is no perl control over the data.
09:25 thd          I do not mean for your tests but for production systems
09:25 hdl          Currently, in PURE Mysql, everything works just fine.
09:24 thd          in SQL frameworks currently?
09:24 hdl          and to get it right.
09:24 hdl          No. I am trying to use utf-8.
09:24 thd          hdl: were you using ISO-8859?
09:24 hdl          (at the moment)
09:23 hdl          thd : labels are contained in mysql tables.
09:23 hdl          So we "French", but also other non-English languages, would have to recode all the Mysql entries.
09:23 thd          hdl: I was only referring to labels not to record content
09:22 thd          hdl: if you are only concerned about framework labels, why are HTML entities not a sufficient solution, even if they are not an ideal solution?
09:22 hdl          thd: librarians would never like to search for Benoît by typing Beno&icirc;t.
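The entity approach under discussion keeps labels as pure ASCII in storage; the browser renders them, but, as hdl notes, typed search input will never match the stored entity form. A small Python illustration:

```python
import html

label = "Beno&icirc;t"                    # ASCII-only, locale-proof in storage
assert html.unescape(label) == "Benoît"   # what the browser displays
assert "Benoît" != label                  # what a librarian types never matches
```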
09:21 hdl          thd: then it is not simply a matter of escaping.
09:21 hdl          thd: If framework data from Mysql is badly displayed then, any data from mysql will be. Do you follow ?
09:21 thd          owen: only if JavaScript is faster and better not just because you can
09:20 owen         to me the Intranet is another matter. I think we can justify requiring librarians to have javascript enabled
09:19 thd          owen: you answered as I asked :)
09:19 thd          owen: which JavaScript is unobtrusive?
09:19 owen         Javascript that enhances where possible, but doesn't exclude
09:19 owen         slef: I subscribe to the philosophy of unobtrusive javascript when it comes to the OPAC
09:18 thd          hdl: I mean use the HTML entity &eacute; instead of a raw é in an SQL framework.  Of course XML frameworks may be better
09:17 slef         owen: javascript-free, I hope ;-)
09:17 thd          hdl: HTML entities display fine for me in UTF-8, as long as the record editor does not need to edit them. The record editor should only need to edit the contents of the fields and subfields, not the labels
09:16 hdl          Sorry ?
09:14 thd          hdl: but if you use HTML entities in the frameworks, then you should not have a problem with multibyte characters, at least for Latin-language-set frameworks.
09:13 hdl          thd: But I am waiting for some time to think it through and try some xsl transforms in order to make them handy both for input and output.
09:12 hdl          thd: I can propose a dtd for frameworks.
09:12 hdl          thd: But this is another developement to go through.
09:11 hdl          thd: This is another reason to go to XML frameworks.
09:11 hdl          thd: I was looking at frameworks data display along with record data.
09:10 hdl          thd: to answer your question.
09:10 hdl          We may be able to add a good xml marc record on our own. (Long, but possible)
09:09 thd          hdl: what UTF-8 data do you contemplate adding from MySQL instead of merely Zebra alone?
09:09 hdl          thd: AND we can control utf8 compliance of data provided.
09:09 hdl          thd: Since we are the ones that code addbiblio.
09:08 hdl          thd: BUT.
09:08 hdl          thd: In our addbiblio.pl, we still use MARC::File::XML and therefore MARC::Charset to input a new biblio.
09:08 owen         I've been working with kados on a new design for the OPAC
09:08 thd          hdl: I have reread your original UTF-8 koha-devel list message carefully and I see the key point which I had previously not grasped well enough from my own lack of sleep at the time.
09:08 owen         that's quite a bit of new
09:07 slef         What's new with you?
09:05 slef         Still wondering about sprinting on Makefile.PL and a web installer to try to get it into 2.2.6 instead of Install.pm, but I think 2.3.0 is a more realistic aim.
09:04 slef         Got a referral from paul for a koha demo
09:04 slef         I broke Install.pm and then fixed kohabug 1154
09:03 owen         Hi slef, what's new?
09:03 slef         hi owen
09:00 slef         anyone else here got SIP(VoIP)?
08:58 slef         yay
08:58 slef         "Bugzilla has suffered an internal error."
08:56 thd          hdl: he stated that ignore_errors(1) reports the error and deletes only the offending character
08:55 thd          hdl: However, the behaviour kados reported a couple of hours ago for ignore_errors(1) is not documented in the man page.
08:55 thd          kados: However, the behaviour you reported a couple of hours ago for ignore_errors(1) is not documented in the man page.
08:53 thd          hdl: MARC::Charset is of little value to you if you have no MARC-8 data.
08:51 slef         fixed
08:47 slef         oh crap
08:47 slef         cvs commit: warning: file `misc/Install.pm' seems to still contain conflict indicators
08:44 thd          hdl: not reporting is also problematic
08:43 thd          hdl: I tend to not report if I cannot report in sufficient detail but my idea of detail is at least two centuries behind the current culture
08:40 hdl          ok.
08:40 thd          hdl: he also has not been sleeping enough to be alive now
08:40 hdl          thd: we all do that sometimes, especially when it bothers us ;) But sometimes I would prefer that he had as much patience as we do when he reports bugs that he considers blocking.
08:38 thd          hdl: he uses mutt as a mail reader, which is fine, but it makes concentrating on more than the briefest message very difficult without better typography in a GUI to aid reading.
08:35 thd          hdl: kados often does not have or take the time to read messages as carefully as he might
08:31 thd          slef: which would need an add comments subject line command
08:29 hdl          Had he read my mail to koha-devel, he would have seen that I was not working from any base, but simply testing some basic features at the atomic level.
08:29 slef         thd: yes, or even just add comments to the bug report
08:29 thd          or subject line?
08:28 thd          slef: you mean with commands in the message body?
08:28 slef         thd: does it let me manipulate bugs by email?
08:28 thd          slef: are you not subscribed to the bugs list?
08:27 slef         slef: test
08:27 slef         hi hdl
08:27 slef         we need an email-based bug tracker ;-)
08:27 hdl          hi slef.
08:26 thd          hdl: kados had imagined earlier that somehow your data was not valid UTF-8 and that was the source of your problems
08:26 hdl          But I thought that koha-3.0 was stable.
08:26 hdl          A missing correct leader seems to be the problem.
08:26 hdl          I merely report things and try and find a solution.
08:25 hdl          thd: Now I try to add utf-8 data to zebra, and it fails.
08:25 hdl          thd: For pure data display. I found a workaround I exposed in my mail to koha-devel.
08:23 thd          hdl: I see so the problem is you cannot designate the encoding before Perl has mangled it from Zebra?
08:23 hdl          I wonder how tumer coped with this.
08:23 hdl          That mix of PERL-processed data and untainted PERL utf8 MYSQL data is what is giving problems.
08:22 hdl          (PERL interpreted)
08:21 hdl          since zebra records are processed in some ways before being displayed.
08:21 hdl          But with zebra, it is different.
08:21 hdl          And when getting data and displaying them, they are not "PERL" interpreted.
08:20 hdl          on the database connection.
08:20 hdl          So what I had to do was set NAMES=utf8
08:20 hdl          o6 is rel_2_2 version and data only comes from Mysql.
08:19 thd          hdl: what did you do to resolve that problem if not designate the string as UTF-8 before passing it on to the template or HTML?
08:18 hdl          s/athroponymes/Anthroponymes/
08:18 hdl          and you will see.
08:18 hdl          look at o6.hdlaurent.paulpoulain.com and search for Egypt in athroponymes
08:17 hdl          It is.
08:17 hdl          This problem I coped with and authorities are now clearly and simply integrated and displayed.
08:17 thd          hdl: I know you were not speaking of it now, but that problem was never resolved, was it?
08:16 hdl          I am not speaking of ancient authorities display in firefox.
08:15 thd          hdl: that had given you uncomposed characters in Firefox even if they were the correct byte codes I believe.
08:15 hdl          I am just reporting things that are blocking for us. We cannot tell our clients ; It is utf-8 compliant provided that you use only non-Mysql utf-8 data.
08:14 thd          hdl: I believe that may have caused a display problem for using authorities to fill fields in the bibliographic record editor when the authority value contains UTF-8 double byte characters.
08:12 hdl          thd: That is a HUGE work... and bugs can still be badly hidden, unless we use a good API or good modules that cope with it and use ONLY these modules in our code.
08:09 thd          hdl: I believe that every IO operation may require blessing the data as UTF-8 from earlier findings about how to use UTF-8 data correctly in Perl.
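thd's "every IO operation" point, in Python terms (the Perl equivalent is binmode(FH, ":utf8") per filehandle, or the -C flags discussed earlier): each I/O boundary has to be told its encoding explicitly.

```python
import io

# A byte stream (file, socket, CGI pipe...) wrapped with an explicit encoding.
raw = io.BytesIO()
text = io.TextIOWrapper(raw, encoding="utf-8")
text.write("Benoît")
text.flush()
assert raw.getvalue() == "Benoît".encode("utf-8")  # bytes on the wire are UTF-8
```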
08:07 thd          hdl: I had equated not producing with killing
08:06 hdl          So no need to kill it.
08:06 hdl          It does not produce leaders in head.
08:05 thd          hdl: it worked fine recently without killing leaders in the record editor for MARC 21 in rel_2_2
08:04 thd          hdl: is it killing leaders in head?
08:03 hdl          line 445
08:03 hdl          in addbiblio
08:03 thd          hdl: when is MARChtml2xml invoked?
08:02 hdl          I was aware of this but did not notice there was none.
08:02 thd          hdl: yes a leader is very necessary
08:01 hdl          I tried adding utf-8 data, but since MARChtml2xml does not produce a valid xml MARC record (no leader), it fails.
08:00 thd          hdl: kados has left for a meeting and will probably be out much of the day
08:00 hdl          UNIMARC or USMARC is not the problem.
07:59 thd          I mean for your current tests where MARC::Charset gives problems?
07:59 hdl          I was just trying to add a simple record into my database. And miserably failed at it.
07:58 thd          hdl: are you using UNIMARC records?
07:47 hdl          I will try adding a leader with a as 8th character.
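For reference (MARC 21 convention; a hedged sketch, since the leader value below is made up for illustration): the leader is a fixed 24-character field, and the character at offset 9 (zero-based) declares the coding scheme, with "a" meaning UCS/Unicode.

```python
# An illustrative MARC 21 leader, not real record data.
leader = "00000nam a22000007a 4500"
assert len(leader) == 24   # the leader is always exactly 24 characters
assert leader[9] == "a"    # offset 9: "a" = UCS/Unicode (UTF-8) in MARC 21
```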
07:47 hdl          Seems my error comes from the fact that there is no leader created using MARChtml2xml.
07:46 hdl          kados: read your mail on koha-devel.
07:43 dewey        hdl, I didn't have anything matching i am
07:43 hdl          dewey: forget i am
07:43 dewey        hdl, I didn't have anything matching i
07:43 hdl          dewey: forget i
07:43 dewey        you are here
07:43 hdl          dewey: who am i
07:42 dewey        i already had it that way, hdl.
07:42 hdl          kados: i am here
07:33 thd          and I rebuild it every week at least
07:32 thd          kados: I know that you are gone now but that behaviour for ignore_errors(1) is not documented in the man page.
07:23 kados        ok, I've gotta run ... talk later
07:23 thd          kados: really so it does not really ignore them completely
07:23 kados        rather than the whole subfield
07:23 kados        thd: it will only report an error, and will remove the offending character
07:22 kados        thd: that's what it does already
07:21 thd          kados: I will have to ask Ed Summers for MARC::Charset->ignore_errors(2) which reports errors but does not lose the subfield.
07:18 thd          hello hdl
06:39 thd          yes
06:38 kados        thd: are you present?
06:20 kados        hdl: you there?
04:32 chris        ahh i think he has posted to the list before .. ill look
04:31 hdl          I would have sent him another barcode.pl
04:31 hdl          A shame I do not have any email for him.
04:29 chris        hi hdl, i think qiqo is from the philipines .. or i might be confusing them with someone else
04:27 chris        back
04:00 btoumi       hdl: chris isn't here, he should normally be back soon
04:00 btoumi       hdl: ah ok
03:57 hdl          chris do you know who qiqo is?
03:56 hdl          chris there ?
03:56 hdl          But since I don't know him...
03:56 hdl          Possibly send him an email.
03:56 hdl          I wanted to discuss his barcode problem a bit.
03:55 btoumi       hdl: no, why?
03:55 hdl          hi thd
03:54 hdl          btoumi? do you know qiqo?
02:22 qiqo         anybody who has other views on this matter?
02:16 qiqo         what shall i do ...
02:14 qiqo         :(
02:14 qiqo         so basically, the barcode system wont work?
02:13 hdl          And maybe there is a hack to get the good barcodes. I don't remember.
02:12 qiqo         yes,, i have 0.33? not 0.3r77?
02:12 hdl          barcode.pl is a quite old module, which only works with PDF::API2 version 0.33r77
02:11 qiqo         im using 2.2.5
02:11 qiqo         hmm..
02:11 hdl          qiqo: I don't think so.
02:10 btoumi       hi hdl
02:10 qiqo         do i need to get the barcode module using cvs?
02:10 qiqo         and another question how do i enable printing labels?
02:10 hdl          hi btoumi.
02:10 hdl          yes.
02:10 btoumi       hi all
02:09 dewey        there is probably a minor diff in <div>s, that I missed
02:09 qiqo         hdl:  still there?
02:07 qiqo         how does that happen
02:07 qiqo         like for example i assigned 00001 for a book,, when i printed into a pdf the codes, the code becomes 000000000017
02:04 mohamedimran any update on my ldap query
02:04 mohamedimran hi hdl
02:03 qiqo         when i create the pdfs.. the bardcode that i assigned when i catalogued a book seemed different
02:02 qiqo         yes,, i am having some problems with barcod printing
02:02 mohamedimran ya
02:02 qiqo         hehe
02:02 qiqo         can we speak in english now?
02:01 hdl          the barcodes don't work: what do you mean by that?
02:01 hdl          hi qiqo
02:00 qiqo         allo??
01:57 qiqo         the barcodes don't work
01:56 qiqo         I have a problem with koha.. huhu
01:56 qiqo         how's it going?
01:55 dewey        hi, mohamedimran
01:55 mohamedimran hi
01:55 qiqo         hi mohamedimran
01:40 qiqo         when will 2.2.6 be available
01:33 qiqo         anybody home?
01:33 qiqo         hi
23:07 chris        not anyone here i dont think
23:06 rychi        can anyone with an updated rel_2_2 verify that /cgi-bin/koha/admin/marc_subfields_structure.pl looks normal?
23:06 rychi        i get the same behavior in default and npl.
23:03 dewey        chris: I forgot which templates
23:03 chris        dewey: forget which templates
23:02 dewey        chris: I forgot npl ones
23:02 chris        dewey: forget the npl ones
23:02 dewey        the npl ones are not.
23:02 chris        the npl ones?
23:02 dewey        which templates are OK for dev_week ? npl ?
23:02 chris        which templates?
23:01 chris        i havent worked on it /looked at it lately
23:01 chris        umm as far as i know it should
22:57 rychi        the change seems to be with this escapeHTML stuff.
22:57 rychi        I am getting a wacky 'hidden' field ... it has some html in it, rather than a tinyint .
22:56 rychi        The rel2_2 marc_subfield_structure editor should work, correct?
22:51 chris        yep, fire away ryan
22:47 chris        back
22:38 mason        too late ryan, he's off :)
22:33 rychi        hi chris.  care to answer a question when you're back?
22:33 chris        quick walk in the sun, be back in 15 mins or so
22:32 chris        hi ryan ... im just on my way out
22:32 rychi        hello koha people
13:07 thd          kados: which URLs were you expecting?
13:07 thd          kados: I believe that 856 $u appears.
13:06 thd          kados: some getMARCurls should be in the default view if I remember correctly.
12:56 shedges      Arabic's a bitch!
12:56 kados        sweet
12:56 shedges      been working on kohadocs index page, making lots of little changes to make it validate
12:55 shedges      cool!
12:55 kados        shedges: btw: I managed to rebuild NPL's leader data with no loss
12:55 shedges      hey
12:55 kados        shedges: afternoon