Ok, I have almost all the ldap-api-model tests passing now, with the new prepareString, the new Value, and the refactored normalizers.
The next step is to review the filter parsing, which is problematic when we don't have a SchemaManager. RFC 4515 (https://tools.ietf.org/search/rfc4515) describes how to represent a Filter as a String, and conversely how we should convert a String back to a Filter before transmitting it. The whole process is like:

    client                          network              server
    "<filter>" --(encode)-->       transmit       --(decode)--> process

Here, we use a Value to store the Filter's assertion value, which is a problem on the client when we don't have a SchemaManager, because then if we find an escaped char in an assertion value, like:

    "(cn=ACME\28tm\29)"

(which is the encoding for "cn=ACME(tm)"), the filter parser decides that the assertion value is a binary value. The consequence is that doing something like:

    SimpleNode<?> node = ( SimpleNode<?> ) FilterParser.parse( null, "(ou=ACME\28tm\29)" );
    assertEquals( "ACME(tm)", node.getValue().getValue() );

will simply fail (getValue() always returns null when the value is binary).

At this point, I have no clear solution. The thing is that it's not really important to be able to distinguish between String and binary values after having parsed the filter, if we aren't schema aware. It's probably enough to just store the value as a String no matter what, leaving it up to the user to call getBytes() if the Value is supposed to be binary. The risk is that in some corner cases, storing a pure binary value into a String will lose some data.

If anyone has a better idea...
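To make the escaping issue concrete, here is a minimal sketch of RFC 4515 value unescaping. This is a hypothetical helper, not the actual FilterParser code: it decodes each \XX hex pair back into the byte it represents, then interprets the whole result as a String, which is exactly the "store it as a String no matter what" approach described above.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class Rfc4515Unescape
{
    /**
     * Decodes RFC 4515 \XX hex escapes in an assertion value.
     * Sketch only: non-escaped chars are assumed to be ASCII,
     * and the decoded bytes are interpreted as UTF-8.
     */
    public static String unescape( String value )
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();

        for ( int i = 0; i < value.length(); i++ )
        {
            char c = value.charAt( i );

            if ( c == '\\' && i + 2 < value.length() )
            {
                // A '\' introduces two hex digits encoding one byte
                out.write( Integer.parseInt( value.substring( i + 1, i + 3 ), 16 ) );
                i += 2;
            }
            else
            {
                out.write( ( byte ) c );
            }
        }

        return new String( out.toByteArray(), StandardCharsets.UTF_8 );
    }

    public static void main( String[] args )
    {
        System.out.println( unescape( "ACME\\28tm\\29" ) ); // prints "ACME(tm)"
    }
}
```

With something like this, the parser could always produce "ACME(tm)" as a String value, and a caller who knows the attribute is binary could still recover the bytes with getBytes(); the corner case mentioned above is that a pure binary value round-tripped through a String may not survive intact.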