http://bugs.winehq.org/show_bug.cgi?id=10660
--- Comment #32 from Dmitry Timoshkov <dmitry@codeweavers.com> 2008-02-11 20:59:49 ---
(In reply to comment #30)
> If the latin1 bit is not set on a normal font then windows won't use it for anything useful.
This is not true. There are other useful unicode ranges in the fonts besides Latin1.
> So FontForge pretty much always sets this bit when outputting normal fonts. When outputting symbol fonts it does not set this bit as it isn't applicable.
What if the font developer creates a font with several unicode ranges, including symbol?
> The root of the problem, as I keep saying (five times? six?), is that fontforge no longer generates a 3,0 cmap entry for marlett.sfd (unless you specifically request a symbol encoding). This has a number of implications, including the way the OS/2 code pages are defaulted.
Font character mappings and unicode ranges are different things, not related to each other. It's perfectly legal to have a unicode character map, and simultaneously have a symbol unicode range in the font.
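(For reference, the two things can be checked independently on a generated font. Below is a minimal sketch using the fontTools Python library, which is not part of this bug and is only an illustration; the file name is hypothetical.)

  # List the cmap subtables of a generated font, e.g. to see whether it
  # contains a (3,0) symbol subtable, a (3,1) unicode subtable, or both.
  from fontTools.ttLib import TTFont

  font = TTFont("marlett.ttf")  # hypothetical path to a generated font
  for sub in font["cmap"].tables:
      print("platformID=%d platEncID=%d format=%d"
            % (sub.platformID, sub.platEncID, sub.format))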
> If you don't want fontforge to default the setting of the code pages/unicode ranges then you can set them explicitly in Element->Font Info->OS/2->Charsets.
As Ove mentioned, that doesn't work for some reason.
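(For what it's worth, the scripted equivalent of that dialog would look roughly like the sketch below. It assumes FontForge's Python binding and its os2_codepages attribute; the attribute name and the bit layout are my assumptions, not something confirmed in this bug.)

  # Hedged sketch: set the OS/2 code page bits explicitly instead of
  # letting fontforge default them, then regenerate the font.
  # os2_codepages is an assumption about fontforge's Python API;
  # bit 31 of ulCodePageRange1 is the Symbol bit.
  import fontforge

  f = fontforge.open("marlett.sfd")
  f.os2_codepages = (0x80000000, 0)  # Symbol only, no Latin1
  f.generate("marlett.ttf")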
> No this doesn't count as a fontforge bug. If you believe you have a fontforge bug please report them on the fontforge mailing list. The wine bug-tracker is not an appropriate place.
This bug is special. It has been closed as invalid, but reopened later to better understand the problem.
That still doesn't answer the question why fontforge now sets the Latin1 *and* Symbol bits in the ulCodePageRange1 field of the OS/2 TrueType header, while previously it only set the Symbol one.
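(To make that concrete, both bits can be inspected directly in a generated file. A minimal sketch with the fontTools library, again only an illustration here; per the OpenType spec, bit 0 of ulCodePageRange1 is Latin 1 (cp1252) and bit 31 is the Symbol character set.)

  # Check the Latin1 and Symbol bits of ulCodePageRange1 in the OS/2 table.
  from fontTools.ttLib import TTFont

  os2 = TTFont("marlett.ttf")["OS/2"]  # hypothetical generated font
  print("Latin1 bit set: %s" % bool(os2.ulCodePageRange1 & 0x00000001))
  print("Symbol bit set: %s" % bool(os2.ulCodePageRange1 & 0x80000000))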
> When I use fontforge to generate a truetype font with a symbol encoding it does *NOT* set the latin1 bit.
>   Open("marlett.sfd")
>   Generate("marlett.sym.ttf")
My impression is that the font should be created based on the information in the font file. If a hack is needed to specify the font encoding through the file extension (and what if a developer needs several encodings in a single font?), then this looks like a limitation of the file format, and should be considered something to be fixed.
Again, I'd like to point out that the .ttf -> .sfd -> .ttf path produces a font that is not compatible with the original one, regardless of which unicode ranges the font contains.
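(One way to demonstrate that would be to diff the relevant OS/2 fields of the original font and the round-tripped one; a rough sketch with fontTools, with hypothetical file names:)

  # Compare the OS/2 unicode/code page range fields of the original font
  # and the one produced by the .ttf -> .sfd -> .ttf round trip.
  from fontTools.ttLib import TTFont

  def os2_ranges(path):
      os2 = TTFont(path)["OS/2"]
      return (os2.ulUnicodeRange1, os2.ulUnicodeRange2,
              os2.ulCodePageRange1, os2.ulCodePageRange2)

  print("original : %s" % ["0x%08x" % v for v in os2_ranges("marlett.orig.ttf")])
  print("roundtrip: %s" % ["0x%08x" % v for v in os2_ranges("marlett.roundtrip.ttf")])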
Also, there is a thing called backwards compatibility. A behaviour of fontforge that was valid for years is now suddenly called broken, making previously valid .sfd files useless.
> As I pointed out in my previous post marlett is not a valid sfd file. It claims an encoding (adobe symbol) which it does not have. There seems to be an assumption that FontForge's symbol encoding (which is Adobe's) means the same as the symbol cmap type. That is not the case. The behavior you are depending on was never documented. Now can we please leave this topic? The old behavior was wrong. Marlett.sfd is mildly wrong. You have a fix which works.
I'm sorry, but I don't think that a hack with a file extension qualifies as a fix. I'd prefer to have a real solution which doesn't prevent adding other unicode ranges to marlett in future.