Last year I left the I18N team at work. I hope this doesn't change much, as I can still do some work in my spare time like other people do, but you may have noticed a decline in my activity in that area. In any case, I would be more than happy if someone could help with the projects, e.g., gettext, libunistring, GNOME Characters, and input methods, by taking over the pending tasks (or the maintenance). Thank you.
Ramen is undoubtedly one of the most popular fast foods in Japan. When I was leaving the country, I was sure I would miss it, so I bought a dedicated machine for making the noodles. It is a simple, manually operated machine (like a pasta maker), but it has multiple functions that took me some training to get used to.
Anyway, here are some outcomes.
The good thing here is that I can easily get quality flour in the supermarket. There are actually many types of flour available: whole wheat, buckwheat, etc., so in theory I can make any type of Japanese noodle. One thing missing here, however, is the Chinese "magic powder" (kansui, an alkaline mixture) that makes ramen noodles chewy.
The broth can be made from chicken legs or pork bones, which are also easy to find, and we have great mushrooms here in the Czech Republic.
On weekends I enjoy trying to cook different types of noodles. I made soba today:
It was a bit of a challenge, because buckwheat flour tends to dry out quickly.
A few days ago, we on the RH i18n team had a lightning talk session using a video conferencing system. Unfortunately, the system was non-free and not privacy-aware, so I presented the lowest-priority topic among my public todo items: a data format that efficiently represents the Unicode character database (UnicodeData.txt, 1.4MB) while providing flexible search functionality. Although similar libraries already exist, few of them support partial keyword matching.
I showed a simple algorithm using two suffix arrays, along with size estimates. Today, I prototyped it in Python as mental gymnastics. For those who might be interested, here is the code (and also slightly modified slides).
It can be used like this:
$ ./build.py UnicodeData.txt
$ du -ah names.* words.*
208K    names.data
72K     names.id
284K    names.sa
100K    words.data
32K     words.id
204K    words.sa
$ ./search.py PROLO
KATAKANA-HIRAGANA PROLONGED SOUND MARK
HALFWIDTH KATAKANA-HIRAGANA PROLONGED SOUND MARK
$ ./search.py 'OF P'
SYRIAC END OF PARAGRAPH
SLICE OF PIZZA
PILE OF POO
END OF PROOF
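To give a rough idea of how suffix-array search over character names works, here is a minimal sketch. It is not the prototype itself: the actual code builds two suffix arrays (one over names, one over words) and stores them in the compact on-disk format above, while this sketch uses a single in-memory array over a handful of hard-coded example names.

```python
# Sketch of substring search over character names with one suffix array.
# The names below are illustrative examples, not the real data files.
import bisect

names = [
    "SYRIAC END OF PARAGRAPH",
    "SLICE OF PIZZA",
    "PILE OF POO",
    "END OF PROOF",
]

# Concatenate the names with a NUL separator, remembering where each starts.
# The separator prevents a keyword from matching across two names.
text = ""
starts = []
for name in names:
    starts.append(len(text))
    text += name + "\x00"

# Naive suffix array construction: sort all suffix start positions.
# Fine for a sketch; real builders use an O(n log n) or O(n) algorithm.
sa = sorted(range(len(text)), key=lambda i: text[i:])

def search(keyword):
    """Return every name containing keyword as a substring."""
    k = len(keyword)
    # Binary search for the contiguous range of suffixes whose
    # first k characters equal the keyword (lower bound).
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + k] < keyword:
            lo = mid + 1
        else:
            hi = mid
    first = lo
    # Upper bound of the same range.
    lo, hi = first, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + k] <= keyword:
            lo = mid + 1
        else:
            hi = mid
    # Map each matching suffix position back to the name it falls inside.
    hits = set()
    for i in sa[first:lo]:
        idx = bisect.bisect_right(starts, i) - 1
        hits.add(names[idx])
    return sorted(hits)
```

Because any partial keyword like 'OF P' is a prefix of some suffixes, the search finds matches anywhere inside a name, which is exactly the partial matching that most existing libraries lack.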
Other changes include improvements to msgfmt and msgattrib, a fix for a setlocale() bug on Mac OS X, fixes for cross-compilation targeting mingw (not mingw-w64), and, less conspicuously, improved compatibility with newer automake and support for running the tests in parallel. See the announcement for details.
This week, I released a few packages: GNOME's Caribou, GNU gettext, and the Ruby gpgme gem. These are all minor updates to the previous releases and do not include any exciting new features, but if you have run into build issues or unexpected crashes with the previous versions, it might be worth giving them a try.