http://bugs.winehq.org/show_bug.cgi?id=59544

--- Comment #4 from Robert-Gerigk@online.de ---

All three functions process the input WCHAR array character by character without checking for surrogate pairs:

1. font_GetGlyphIndices (line 4027):

       UINT glyph = str[i];

   Each WCHAR is passed directly to get_glyph_index(). For a surrogate pair such as U+1F600, str[i] = 0xD83D and str[i+1] = 0xDE00 are looked up separately. Neither half is a valid codepoint on its own, so both return .notdef (glyph index 0).

2. get_total_extents (line 5529):

       get_glyph_bitmap(hdc, str[i], flags, aa_flags, &metrics, NULL);

   This path calculates text bounding boxes. Each surrogate half gets its own metrics (the .notdef width), so the total extent is wrong: two .notdef widths instead of one emoji width.

3. nulldrv_ExtTextOut (line 5720):

       get_glyph_bitmap(dev->hdc, str[i], flags, GGO_BITMAP, &metrics, &image);

   This is the actual bitmap rendering path. Each surrogate half is rendered as a separate .notdef square, producing the visible "two squares per emoji" artifact.

The three are related because they need the same fix: detect a high surrogate (0xD800-0xDBFF), combine it with the following low surrogate to form the full codepoint, and pass that to the glyph lookup. The low-surrogate position then needs special handling (same glyph index / zero-width advance / skip rendering). My merge request (!10413) addresses all three in a single patch since they share the same pattern.
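The combining step described above can be sketched as follows. This is a minimal illustration of UTF-16 surrogate-pair decoding, not code from merge request !10413; the helper name decode_utf16_char and its signature are hypothetical, and the surrogate arithmetic follows the standard UTF-16 formula:

```c
#include <assert.h>

/* Hypothetical helper: read one codepoint from a UTF-16 (WCHAR) string.
 * If str[*i] is a high surrogate (0xD800-0xDBFF) followed by a low
 * surrogate (0xDC00-0xDFFF), combine the pair into the full codepoint
 * and advance *i past the low surrogate.  Otherwise return str[*i]
 * unchanged (BMP character or unpaired surrogate). */
static unsigned int decode_utf16_char(const unsigned short *str, int len, int *i)
{
    unsigned short hi = str[*i];

    if (hi >= 0xD800 && hi <= 0xDBFF && *i + 1 < len)
    {
        unsigned short lo = str[*i + 1];
        if (lo >= 0xDC00 && lo <= 0xDFFF)
        {
            (*i)++;  /* consume the low surrogate too */
            return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00);
        }
    }
    return hi;
}
```

With U+1F600 encoded as 0xD83D 0xDE00, the helper returns 0x1F600 and skips the low surrogate, so the glyph lookup sees one codepoint instead of two invalid halves. A caller loop would then assign the low-surrogate position a zero-width advance (or skip it) along the lines the comment describes.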