https://bugs.winehq.org/show_bug.cgi?id=55304
--- Comment #6 from Hans Leidekker <hans@meelstraat.net> ---
(In reply to Dmitry Timoshkov from comment #2)
> (In reply to Hans Leidekker from comment #1)
> > SSF is used by ldap_int_sasl_bind() to determine whether the SASL security layer should be used at all. This page documents it as representing the key size for values > 1:
> > https://docs.oracle.com/cd/E19120-01/open.solaris/819-2145/sasl.intro-44/index.html
> > libldap simply checks for a non-zero ssf value:
> > https://github.com/openldap/openldap/blob/master/libraries/libldap/cyrus.c#L745
> That's an implementation detail we shouldn't rely on.
> > I don't understand why the trailer size depends on the server, since we have always depended on a constant trailer size. Is that because of a difference between Samba and AD?
> I've tested with Microsoft and Samba servers, and both of them use 60 for the actual encrypted data; decrypting with 64 obviously doesn't work. At the same time, QueryContextAttributesA(SECPKG_ATTR_SIZES) returns 64 for the connection.
> > I can find references for Windows changing the max token size from 12000 to 48000, but not 48256. Where does it come from?
> It comes from a test that calls QueryContextAttributesA(SECPKG_ATTR_SIZES) under Windows 10 against the Samba server.
Hmm, maybe it's not decrypting in-place? This used to work without querying the trailer size from the context.
> > Also note that your changes to sasl_decode() and sasl_encode() break NTLM.
> Do you have sample code that breaks, that could be used for testing purposes?
I modified the wldap32 test to connect to an AD LDAP server and hacked the Kerberos dll to return an error so that Negotiate falls back to NTLM. Essentially it's this:
ldap_bind_sA( ld, NULL, (char *)&auth_id, LDAP_AUTH_NEGOTIATE );