https://bugs.winehq.org/show_bug.cgi?id=55304
--- Comment #2 from Dmitry Timoshkov <dmitry@baikal.ru> ---
(In reply to Hans Leidekker from comment #1)
Thanks for looking at this.
> I have replaced QueryContextAttributesA(SECPKG_ATTR_KEY_INFO) with QueryContextAttributesA(SECPKG_ATTR_SESSION_KEY) after implementing the attribute for Kerberos and NTLM.
Thanks.
SSF is used by ldap_int_sasl_bind() to determine if the SASL security layer should be used at all. This page documents it as representing the key size for values > 1:
https://docs.oracle.com/cd/E19120-01/open.solaris/819-2145/sasl.intro-44/index.html
libldap simply checks for a non-zero ssf value: https://github.com/openldap/openldap/blob/master/libraries/libldap/cyrus.c#L...
> I don't understand why the trailer size depends on the server since we have always depended on a constant trailer size. Is that because of a difference between Samba and AD?
I've tested with Microsoft and Samba servers, and both use 60 bytes for the actual encrypted data; decrypting with 64 obviously doesn't work. At the same time, QueryContextAttributesA(SECPKG_ATTR_SIZES) returns 64 for the connection.
> I can find references for Windows changing from 12000 max token size to 48000 but not 48256. Where does it come from?
It comes from a test that calls QueryContextAttributesA(SECPKG_ATTR_SIZES) under Windows 10 for the Samba server.
> Also note that your changes to sasl_decode() and sasl_encode() break NTLM.
Do you have sample code that reproduces the breakage and could be used for testing purposes?