I've got the following test case that behaves quite differently between
v1.0.4 and v1.1.1.

In the code below, v1.0.4 returns undefined for both test cases--which
certainly seems like the expected behavior (I mean, if an element doesn't
exist, it shouldn't be returned). Also, the two statements should, for all
intents and purposes, return exactly the same results.

However, in v1.1.1 you get radically different results. The first case
returns the context element--which definitely seems wrong--and the second
case returns an empty element--which also seems unexpected.

Should v1.1.1 really be returning an empty element if no match exists?

-Dan

PS - Here's the test.

<ul id="idTest">
  <li class="selected2">
    Item 1
  </li>
  <li>
    Item 2
  </li>
  <li>
    Item 3
  </li>
</ul>

<input type="button" value="test" onclick="bug()" />

<script type="text/javascript">
function bug(){

  // v1.1.1 returns parent element (the UL--which is the context element)
  // v1.0.4 returns undefined
  alert(
    $("li.selected", document.getElementById("idTest"))[0]
  );

  // v1.1.1 returns an empty element
  // v1.0.4 returns undefined
  alert(
    $("#idTest li.selected")[0]
  );
}
</script>


_______________________________________________
jQuery mailing list
discuss@jquery.com
http://jquery.com/discuss/