Time  Nick       Message
23:41 owen       Got the files, thanks hdl
23:18 owen       hdl: that sounds good to me
23:04 hdl        Maybe I should send you the three files so that you can test.
23:04 hdl        owen: I tried to rebase and had conflicts on that file.
22:55 fbcit      atz: heh
22:53 hdl        owen: it seems that a commit on the same template, adding some UI for tabs at line 46, is making the patch fail to apply.
22:50 atz        it doesn't help them avoid a solution that is itself another bug  :)
22:50 gmcharlt   although, back to the bug report issue, raising it via bugs.koha.org does make it easier for other interested parties to find the problem description and contribute
22:49 gmcharlt   yep
22:49 hdl        gmcharlt: sorry to bug you. But we also have to be able to describe the problem so that the solution is OK for everyone.
22:47 gmcharlt   hdl: re your last to me - I understand completely - I can fall into the same trap myself
22:42 hdl        And it failed with the same error as you.
22:42 hdl        I tried to apply the patch on another git repository.
22:40 atz        hrm...
22:40 owen       How about you hdl?
22:40 owen       I just fetched and rebased and tried again, with the same results
22:40 atz        have you rebased recently?
22:36 owen       No, I don't have any outstanding changes to that file
22:36 atz        if you have useful edits, commit them first, then reapply
22:35 atz        and then try to reapply
22:35 atz        if you have useless edits to that file, then you can do git checkout koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl
22:35 atz        that usually means the version the patch started from and your current version don't match
22:34 owen       0001-suggestion-management-Improvements.patch:157: error: patch failed: koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl:46
22:34 owen       error: koha-tmpl/intranet-tmpl/prog/en/modules/suggestion/acceptorreject.tmpl: patch does not apply
22:33 hdl        owen: has the patch applied?
22:32 owen       But no other error messages
22:32 atz        for other code sets it might matter more
22:32 atz        yeah, there is an option to turn off those warnings
22:32 owen       I see stuff about trailing whitespace, but that seems typical
22:31 atz        permissions, etc.
22:31 atz        usually only held up by stuff like missing directories or files
22:30 atz        then it will give an error message as to why
22:30 owen       And if it doesn't? :)
22:29 atz        if it applies OK, that's all there is
22:29 owen       I'm not sure I'm doing it right... git-apply <path to patch> ? Is there more to it than that?
22:29 owen       Hi hdl, I'm just now getting a chance to try your suggestions patch
22:28 hdl        still around owen.
22:28 owen       hdl around? I know it's late...
22:19 hdl        gmcharlt: I will. But I do not like to just file bugs. I also want to be able to propose solutions, and even patches.
22:18 gmcharlt   so I still think it would be in our best interests if a bug were filed :)
22:18 gmcharlt   but this will need to be tested under both the MARC21 and UNIMARC options
22:17 gmcharlt   hdl: probably marcflavour in _parse_unlinked_item_subfields_from_xml is not needed or could be a constant MARC21
22:16 hdl        So item information is saved without 100$a, but when decoding, it is required.
22:16 fbcit      gmcharlt: there are the patches to correct column order :)
22:14 hdl        marcflavour is not used.
22:12 hdl        But when it comes to saving:
22:12 hdl        So you are using 100$a (for UNIMARC, to decode the XML) to provide information for editing items.
22:11 hdl        at line 2020 you write: my $marc = MARC::Record->new_from_xml(StripNonXmlChars($xml), 'UTF-8', C4::Context->preference("marcflavour"));
22:10 gmcharlt   hdl: again, if you have a bug, please provide a test case and report it - I will be happy to provide more explanation of what I was up to, but your providing concrete information of what is breaking would really help
22:08 hdl        Is this wrong ?
22:08 hdl        gmcharlt: I first want to analyse your process. And see what can be done to make it work for us.
22:07 gmcharlt   hdl: if you have a bug, please report it
22:06 hdl        gmcharlt: I would agree, if it would not break on decoding that XML for UNIMARC for want of 100$a.
22:05 gmcharlt   hdl: this is intentional - the XML snippets used in that column are (a) always UTF-8 and (b) always integrated into biblioitems.marcxml for indexing
22:05 hdl        But to decode this XML snippet, 100$a is required.
22:04 hdl        So it does not need field 100$a.
22:04 gmcharlt   fbcit: in your deleteditems patch, the add of enumchron must be *before* copynum to preserve column order
22:04 hdl        But it does not use C4::Context->preference('marcflavour') to generate this XML.
22:03 gmcharlt   hdl: correct
22:02 fbcit      gmcharlt: I just added it to my db ver in updatedatabase.pl
22:02 hdl        And I see that it uses XML records for storing unlinked fields.
22:02 gmcharlt   fbcit: items.enumchron = volume statement, e.g., "v.10 (2004)"
22:02 hdl        gmcharlt: I am looking at the way items are stored.
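The change gmcharlt suggests above is to stop consulting the marcflavour syspref when decoding these item-level XML snippets. A minimal sketch of that idea in Perl (the subroutine body here is hypothetical, not the actual C4::Items code; the point is only the constant 'MARC21' passed to new_from_xml):

    use MARC::Record;
    use MARC::File::XML ( BinaryEncoding => 'utf8' );

    # Hypothetical sketch, not the real C4::Items routine: decode the stored
    # snippet as MARC21 unconditionally.  The snippets are always UTF-8, and
    # the UNIMARC decoder would otherwise fail for want of a 100$a that
    # these fragments never carry.
    sub _parse_unlinked_item_subfields_from_xml {
        my $xml = shift;
        return [] unless defined $xml && $xml =~ /\S/;
        my $marc = MARC::Record->new_from_xml( $xml, 'UTF-8', 'MARC21' );
        return [ map  { $_->subfields() }
                 grep { !$_->is_control_field() } $marc->fields() ];
    }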
22:00 fbcit      gmcharlt: you should have another patch adding another missing column to deleteditems
21:52 fbcit      gmcharlt: any idea what items.enumchron is?
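For reference, a sketch of the kind of updatedatabase.pl entry being discussed, in that file's usual $DBversion pattern (the version number and column types are illustrative, not the submitted patch; the one constraint from the discussion is that enumchron is added before copynumber):

    # hypothetical updatedatabase.pl fragment -- add the missing deleteditems
    # columns, enumchron first so the column order matches the items table
    $DBversion = '3.00.00.0XX';    # whichever DB rev the patch claims
    if ( C4::Context->preference('Version') < TransformToNum($DBversion) ) {
        $dbh->do("ALTER TABLE deleteditems ADD COLUMN enumchron VARCHAR(80) DEFAULT NULL");
        $dbh->do("ALTER TABLE deleteditems ADD COLUMN copynumber SMALLINT(6) DEFAULT NULL");
        print "Upgrade to $DBversion done (added enumchron and copynumber to deleteditems)\n";
        SetVersion($DBversion);
    }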
21:38 hdl        Anyway, let us think about Zebra now, and then about the others.
21:38 hdl        but this would be Zebra-specific. And all the search engines would need their own special configuration....
21:35 hdl        But I think that is not the case.
21:35 hdl        ..... unless it is a character-to-character mapping.
21:34 hdl        Since you can define character mappings.
21:33 hdl        And Zebra can be configured so that it is insensitive to NF.
21:33 gmcharlt   hdl: why don't you file a bug for this - will take a bit to research the Zebra options
21:33 hdl        this makes sense.
21:32 gmcharlt   and then make sure that query strings are put in the same NF before being submitted to Zebra
21:31 gmcharlt   then I guess we'll need to stick on a specific NF to use for MARCXML records when they are sent to Zebra
21:31 gmcharlt   if Zebra cannot be thus configured
21:31 gmcharlt   ideally, it would be nice to get Zebra to be insensitive to a specific NF, since Zebra can do NF changes faster than Perl (or Perl XS) code can
21:30 js         (hi all)
21:30 hdl        and maybe this is also a direction.
21:30 gmcharlt   hdl: that's going to require a two-pronged approach, possibly
21:30 hdl        It can be handled via character mappings.
21:29 hdl        since if you query é, you must be able to match all forms of é in Zebra.
21:29 hdl        (hi js)
21:28 hdl        I agree. But this is also a pain for searches:
21:27 gmcharlt   and not rely on any specific NF being used in the database storage
21:27 gmcharlt   but given history, I think that any code in Koha that relies on (or would be made more convenient by) a specific normalization form should do the normalization explicitly, using the appropriate Unicode::Normalize routine
21:26 gmcharlt   e.g., export NFKD for MARC records; use NFC for output to web browsers, etc.
21:26 gmcharlt   hdl: it would be a good idea for us to take some control over it
21:24 hdl        gmcharlt: yes.
21:24 hdl        And if XML records are not normalized, it can end up being a mess to track down la bête (the culprit).
21:24 gmcharlt   hdl: are you referring to Unicode normalization forms, e.g., NFC, NFKD, etc.?
21:23 hdl        é è î can be encoded in two different ways.
21:23 gmcharlt   hdl: what do you mean, specifically?  everything should be in UTF-8 when it is stored in the Koha database.
21:23 hdl        (it could be important for diacritics:
21:23 hdl        gmcharlt: I would like to know whether we consider normalizing UTF-8 before storing elements.
21:21 gmcharlt   hdl: what's your question?
21:20 hdl        ?
21:20 hdl        gmcharlt, atz: is there someone who deals with cataloguing?
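The normalization problem hdl describes is concrete: "é" has two valid Unicode encodings, so byte-wise comparison and Zebra matching both break unless a normalization form is imposed. A minimal sketch of the explicit-normalization policy gmcharlt proposes, using the core Unicode::Normalize module (variable names are illustrative):

    use strict;
    use warnings;
    use Unicode::Normalize qw(NFC NFKD);

    # two valid encodings of the same visible character
    my $precomposed = "\x{00E9}";     # é as a single code point
    my $decomposed  = "e\x{0301}";    # e + combining acute accent
    print $precomposed eq $decomposed ? "equal\n" : "not equal\n";           # not equal
    print NFC($precomposed) eq NFC($decomposed) ? "equal\n" : "not equal\n"; # equal

    # normalize explicitly at each boundary instead of trusting storage:
    my $record      = $decomposed;
    my $for_browser = NFC($record);    # NFC for output to web browsers
    my $for_export  = NFKD($record);   # NFKD for exported MARC records
    # ...and put query strings in the same NF as the records indexed by
    # Zebra, so an é matches however the user's keyboard produced it
    my $query       = NFC($precomposed);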
21:20 fbcit      hi chris
19:41 nengard    but I want more votes and I have a huge community to tap into :)
19:40 chris      hehe
19:40 nengard    I know!!! :) THANKS :)
19:40 chris      already voted
19:40 nengard    chris - not just of you - of everyone :)
19:40 nengard    Thank you!!
19:40 nengard    Send this to your friends and family :)  We only win a vacuum, but if we win the entire contest we get $5000 to give to the animal charity of our choice!!!
19:40 nengard    http://www.bissell.com/redirect.asp?page_id=47118&Pet=762 - Coda
19:40 nengard    http://www.bissell.com/redirect.asp?page_id=47118&Pet=767 - Beau
19:40 nengard    Hi all - sorry for this blatantly off topic post - but the voting ends today (March 11) and I want to donate the prize money to Sheltie Rescue - so it's a good cause :)
19:40 chris      yep?
19:40 nengard    got a favor to ask
19:40 nengard    I'm back
19:34 nengard    ttyl
19:34 nengard    well it's quitting time for me - so I'm off to clean the house :)
19:34 nengard    chris - very cool! I'm getting biblios and some patches :)
19:32 chris      ahh cool
19:31 nengard    chris: he's installing upgrades to my koha install :)
19:27 chris      morning
19:25 fbcit-away bbl
19:25 fbcit      gotta run
19:17 fbcit      gmcharlt: you should have them
19:04 gmcharlt   ok
19:04 fbcit      I'll not change kohaversion.pl and leave it to you
19:03 fbcit      that will work
19:03 gmcharlt   I'll sign off, deal with the DBVer conflict, and send the whole package to patches@ by tomorrow late morning
19:03 gmcharlt   and actually, I have a better idea - send your patch to me directly
19:03 fbcit      and rubs his hands together greedily... :-)
19:02 gmcharlt   if you take 063, while that will still produce a technical conflict for the RM to deal with, the merge will be easy to resolve
19:02 gmcharlt   :)
19:02 gmcharlt   yes. it's mine! mine! I tell you
19:01 fbcit      gmcharlt: you still holding claim to DB 062?
18:56 fbcit      for that reason I missed this issue when adding only one item of a particular bib
18:55 fbcit      it seems to me that the display should reflect an actual query
18:54 fbcit      also, it appears that the initial display of the item after adding it is based on form data rather than an actual query of the newly inserted record
18:53 gmcharlt   yeah, expanding the size of items.booksellerid would fall more into an enh req
18:52 fbcit      I think the acqui issue is more of a feature req
18:52 gmcharlt   sure; please CC me on the patch
18:51 fbcit      right
18:51 gmcharlt   patch for 1927, you mean?
18:51 fbcit      gmcharlt: so I'll submit a patch to fix my bug?
18:51 gmcharlt   yep
18:50 fbcit      exactly
18:50 gmcharlt   fbcit++
18:49 gmcharlt   although since aqbookseller.aqbooksellerid is an int(11) and items.booksellerid is varchar(10), most likely not (or if there is code, it is obviously broken :) )
18:49 fbcit      as an addendum to your devel post: I think all relationships should be db enforced if possible
18:48 gmcharlt   I'm worried whether there's an implicit one that some code is trying to use or enforce
18:48 fbcit      ahhh... I forgot about software-enforced relations
18:48 gmcharlt   not an explicit one, no
18:48 fbcit      s/not/no/
18:48 fbcit      there is not existing FK between the items table and the acqui tables on booksellerid
18:47 gmcharlt   depends on how acqui populates items.booksellerid, and whether any existing code expects an implicit FK relationship
18:47 fbcit      if so, I'll submit a patch to address the issues I noticed and file a bug on the other
18:47 gmcharlt   it might
18:46 fbcit      I wonder if switching to a varchar(255)/mediumtext represents an acceptable transition to a total fix of both issues?
18:46 fbcit      but that is definitely broken at this point in any case
18:45 fbcit      I agree with the FK thought
18:45 gmcharlt   and a freetext source of acquisitions field
18:44 gmcharlt   i.e., key to acq vendor record, if material was purchased via Koha's acq system
18:44 gmcharlt   ideally, you'd want both
18:44 gmcharlt   but if the FK relationship is not intended, then varchar(255) or mediumtext would be OK
18:40 gmcharlt   because items.booksellerid, if it were the right type, might have been intended as a FK of acqbookseller
18:40 gmcharlt   fbcit: upon two minutes examination, looks complicated
18:36 fbcit      s/a/an/
18:36 fbcit      gmcharlt: any reason items.booksellerid should not be a varchar(255)? What if I'd like to enter a uri as the source of an item?
18:35 fbcit      which is really not dropped, it's just never inserted to start with
18:34 fbcit      items.copynumber does not exist, which explains the "dropped" copy number
18:34 fbcit      for starters, items.booksellerid is a varchar(10), which explains the truncation
18:28 fbcit      the items.copynumber appears to be missing in any form
18:21 fbcit      hrmm, additem.pl: DBD::mysql::st execute failed: Unknown column 'copynumber' in 'field list' at /usr/share/koha/lib/C4/Items.pm line 1752.
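A sketch of the two schema fixes this thread converges on, in the $dbh->do style of Koha's upgrade script (illustrative only: mediumtext is one of the two types floated above, and copynumber's type is a guess):

    use C4::Context;

    # hypothetical fix-up for the two items problems above: widen
    # booksellerid so a URI can serve as the acquisition source (no FK is
    # defined on it), and create the copynumber column that C4/Items.pm
    # tries to write into
    my $dbh = C4::Context->dbh;
    $dbh->do("ALTER TABLE items MODIFY COLUMN booksellerid MEDIUMTEXT");
    $dbh->do("ALTER TABLE items ADD COLUMN copynumber VARCHAR(32) DEFAULT NULL");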
18:20 fbcit      I'll have a look for a minute to see if anything stands out
18:19 gmcharlt   fbcit: thanks
18:18 fbcit      1927
18:18 fbcit      I've opened a bug
18:18 fbcit      gmcharlt: the problem appears to be somewhere in the code that retrieves the item record and loads it into the form for editing...
18:09 gmcharlt   fbcit: write them up please - item editing wasn't supposed to be this unstable by this point
18:07 atz        I've seen some bugs reported... i don't know the current status
18:06 fbcit      atz: so basically editing an item record will mess it up with the current state of things?
18:06 atz        glad to hear it
18:05 gmcharlt   but I think biblio-level *only* is sufficient
18:05 gmcharlt   atz: I suppose item level tagging could be supported ("this is signed by Neil Gaiman himself!")
18:05 gmcharlt   atz: I'd definitely ignore biblioitems
18:03 atz        but possibly biblioitems and even items also?
18:03 atz        how many places is that?  biblios is the obvious one
18:03 atz        gmcharlt: regarding tagging... so the tags refer to wherever the catalog data lives
18:01 atz        encoding is still a problem
18:00 atz        yeah, the data does NOT make a round trip through editing w/o perturbation
17:56 fbcit      like the source of acquisition truncates some data, and the copy number disappears, for starters
17:42 atz        no idea there
17:42 fbcit      just addbiblio.pl as it currently exists
17:42 atz        YUI might be involved too
17:42 fbcit      not that
17:42 fbcit      oops
17:42 atz        I know he uses Google Gears, which is my best guess for the flash uploader part
17:41 atz        you'll have to ask ccatalfo, I don't have biblios on mine yet
17:41 fbcit      atz: so are there flash elements in biblios?
17:39 gmcharlt   LCCN is unique per bib - if multiple bibs are originally catalogued by LC but share the same LCCN, that means that LC really, really screwed up
17:38 fbcit      tnx
17:38 gmcharlt   yes
17:38 fbcit      gmcharlt: if all vols have the same LCCN I assume there would only be a single bib?
17:37 gmcharlt   for a multi-volume set, such as a series of scientific monographs where each volume has its own title, often each will get its own bib
17:36 gmcharlt   for something like an encyclopedia, where the volumes don't have a separate title, one bib
17:36 fbcit      flash loads up when I go to add a new biblio
17:36 gmcharlt   fbcit: depends
17:36 fbcit      atz. yep
17:35 fbcit      gmcharlt: do multi-volume works have MARC records for each vol or one record for all vols?
17:35 atz        you mean in biblios?
17:35 atz        flash?
17:22 fbcit      what flash elements are used on addbiblio.pl?
15:52 fbcit      guess I'll have to keep an eye on it for a few days
15:51 fbcit      cron seems fine
15:33 atz        cron is picky, and easy to mess up
15:33 atz        are your cronjobs running OK?
15:32 fbcit      atz: I can't figure out why logrotate did not rotate those log files.
15:27 atz        np
15:27 fbcit      tnx
15:25 fbcit      atz: got it and /dev/sda1             4.0G  1.6G  2.2G  42% /
15:23 atz        (low entropy, lots of repetition)
15:23 atz        log data is highly compressible
15:22 atz        then you can copy back the gz files
15:22 atz        so go ahead and gzip the logs to a different partition, then remove the originals,
15:21 atz        so log-rotate can't happen b/c you can't fit the gzipped file on the same partition
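Since / itself has no room for a compressed copy, the gzip has to land on another partition first, then come back. A Perl sketch of that sequence using the core IO::Compress::Gzip module (paths are illustrative):

    use IO::Compress::Gzip qw(gzip $GzipError);
    use File::Copy qw(move);

    my $src   = '/var/log/messages';        # 1.2G per the du output above
    my $stage = '/home/tmp/messages.gz';    # any partition with free space
    gzip $src => $stage or die "gzip failed: $GzipError";
    unlink $src or die "unlink $src: $!";
    # HUP syslogd so it reopens the log; the space is not reclaimed while
    # the daemon still holds the deleted file open
    move( $stage, '/var/log/messages.gz' ) or die "move failed: $!";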
15:20 fbcit      8-O
15:20 atz        that's pretty huge
15:19 fbcit      1.2G    var/log/syslog
15:19 fbcit      1.2G    var/log/messages
15:19 fbcit      sorry... sort order was backwards
15:19 fbcit      oops
15:18 fbcit      20K     var/log/dmesg
15:18 fbcit      20K     var/log/dmesg.0
15:18 fbcit      32K     var/log/kern.log.0
15:18 fbcit      36K     var/log/auth.log.0
15:18 fbcit      44K     var/log/messages.0
15:18 fbcit      56K     var/log/exim4
15:17 atz        what's your biggest file(s) in /var/log/ ?
15:15 fbcit      so that looks normal as well
15:15 fbcit      not that I know of
15:15 atz        our /usr is 1.3GB
15:15 atz        are you running with DEBUG on or something?
15:14 fbcit      'hates' even
15:14 fbcit      atz: what does your /usr usage look like?
15:12 atz        a viewer for docs
15:11 atz        it's a pager, iirc, like nroff
15:11 fbcit      what is 'groff'???
15:11 atz        on our dev server, /proc is 897M, so  that seems about right
15:11 fbcit      1016K   usr/share/groff
15:09 atz        that's not even HDD
15:09 atz        proc doesn't make any sense
15:08 fbcit      claim the most usage
15:08 fbcit      1.1G    usr
15:08 fbcit      biblios:/# du -sh usr
15:08 fbcit      898M    proc
15:08 fbcit      biblios:/# du -sh proc
15:07 fbcit      I have 5 parts
15:07 fbcit       / is full
15:06 atz        ?
15:06 atz        or do you only have 1
15:06 atz        what partition is full?
15:05 atz        check  /home/zvpaodfiqef/warez/DVD_rips/*
15:05 fbcit      hehe
15:05 gmcharlt   I claim DB rev # 062
15:03 fbcit      I just can't figure out what's hogging it all
15:03 fbcit      it's definitely out of HDD
15:02 fbcit      atz: seems to me 4.0G should be enough for a stripped-down install of Debian?
15:02 atz        df -h
15:01 atz        out of HDD?
14:56 fbcit      hrmm... out of disk space... apache has a 22M error.log
14:55 atz        shoot that horse.
14:36 hdl        thx gmcharlt
14:34 gmcharlt   hdl: logbot for #koha is now back