LINQ to SharePoint Issues and 100% CPU in w3wp.exe
This is a bit of a long story, but I felt it necessary to post the detail in case someone has seen similar behaviour (or this is a known issue with these parts). I've cross-posted because there's a general ASP.NET theme as well as a SharePoint flavour.

I've been helping a customer diagnose performance issues on their SharePoint 2010 site. Specifically, the w3wp.exe process goes to 100% CPU for an hour or so. Before digging too deeply, I looked into some of the code. Now, I'm no SharePoint developer, but some things were obvious and clearly the cause of the System.NullReferenceException-based ASP.NET warnings in the event log, and other things just scared me. One was disposing of system-created objects, particularly the SP RootWeb object, by wrapping the reference in a using().

Another thing that did not sit well was the 'hack' necessary to get LINQ working on sites with anonymous access - this is a public, Internet-facing web site. Microsoft's own documentation states that one should use SPQuery in these circumstances (until they improve the functionality in Microsoft.SharePoint.Linq, anyway). In particular, there's a need to elevate privileges because SPLists are secured but anonymous access is required. So my theory is they did software development by Google and found this:

http://jcapka.blogspot.com/2010/05/making-linq-to-sharepoint-work-for.html

And:

http://blogs.msdn.com/b/sowmyancs/archive/2010/09/19/linq-to-sharepoint-and-runwithelevatedprivileges.aspx

I wish they'd found the latter but, sadly, they chose the former code to implement as a *static* helper method. There's a very subtle but important difference - the former doesn't behave nicely when an exception is thrown inside the anonymous delegate passed in, because it does not set the HttpContext back to its original value, and you then get a NullReferenceException in the logs. This was the cause of one of the issues. Obvious. Fixed that. Job done? Nope.
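For what it's worth, here's a minimal sketch of how a helper like that can be made exception-safe. The class and method names are my own invention (the real code follows the pattern in the first link above); the point is simply that HttpContext.Current must be restored in a finally block so an exception in the delegate can't leave it nulled out:

```csharp
using System;
using System.Web;
using Microsoft.SharePoint;

// Hypothetical reworking of the static helper described above.
public static class ElevatedLinqHelper
{
    public static void RunWithNullHttpContext(Action action)
    {
        HttpContext original = HttpContext.Current;
        try
        {
            // LINQ to SharePoint picks up SPContext via HttpContext.Current,
            // so it is nulled out here to force the elevated context below.
            HttpContext.Current = null;
            SPSecurity.RunWithElevatedPrivileges(delegate
            {
                action();
            });
        }
        finally
        {
            // This restore is what the 'development by Google' version skips
            // on the exception path, leaving later requests on this thread
            // with a null context and a NullReferenceException in the logs.
            HttpContext.Current = original;
        }
    }
}
```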
*Lesson learnt* - don't develop by Google unless you understand what the code is doing, line by line. I've included this part because I think there's an issue with the *static* helper, but I will come back to that.

Oddly, the exception thrown while executing the anonymous delegate is "An item with the same key has already been added." Huh? All they were doing was enumerating the list. This has been reported by others, e.g.:

http://social.technet.microsoft.com/Forums/en-US/sharepoint2010programming/thread/ff3c4212-0372-4088-972b-108c1eda2ee6/

No idea what was going on there (at that point).

So I finally got access to the server while the issue was happening and managed to take a full memory dump of w3wp.exe. Upon looking at the CLR stacks of the long-running threads, the problem was immediately obvious, although the cause was not:

  System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib]].FindEntry(System.__Canon)
  System.Collections.Generic.Dictionary`2[[System.__Canon, mscorlib],[System.__Canon, mscorlib]].TryGetValue(System.__Canon, System.__Canon ByRef)
  Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor.GetEnumerableOperator(System.Reflection.MethodInfo)
  Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor.ConvertMethod(System.Linq.Expressions.MethodCallExpression, Context)
  Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor.<.cctor>b__13(System.Linq.Expressions.MethodCallExpression, Context)
  Microsoft.SharePoint.Linq.Rules.GuardedRule`4+<>c__DisplayClass3[[System.__Canon, mscorlib],[System.__Canon, mscorlib],[Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor+Context, Microsoft.SharePoint.Linq],[System.__Canon, mscorlib]].<.ctor>b__1(System.__Canon, Context)
  Microsoft.SharePoint.Linq.Rules.SwitchRule`3[[System.__Canon, mscorlib],[Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor+Context, Microsoft.SharePoint.Linq],[System.__Canon, mscorlib]].Apply(System.__Canon, Context)
  Microsoft.SharePoint.Linq.Rules.ToEnumerableProcessor.Process(System.Linq.Expressions.Expression, System.Collections.Generic.List`1 ByRef)
  Microsoft.SharePoint.Linq.SPLinqProvider.Rewrite(System.Linq.Expressions.Expression, System.Collections.Generic.List`1 ByRef)
  Microsoft.SharePoint.Linq.SPLinqProvider.RewriteAndCompile[[System.__Canon, mscorlib]](System.Linq.Expressions.Expression, System.Collections.Generic.List`1 ByRef)
  Microsoft.SharePoint.Linq.LinqQuery`1[[System.__Canon, mscorlib]].GetEnumerator()

Dictionary objects aren't thread-safe (if you're modifying them), and you can read more about that on Tess' blog:

http://blogs.msdn.com/b/tess/archive/2009/12/21/high-cpu-in-net-app-using-a-static-generic-dictionary.aspx

But my customer isn't using Dictionary objects anywhere! So I pulled out Reflector to follow this through. It turns out that Microsoft.SharePoint.Linq is - and it's a *static* one at that. But why did this happen in the first place? I can see from the CLR stack traces that 8 long-running threads are all stuck doing the same thing (above).
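For anyone who hasn't hit this class of bug before, here is a self-contained sketch (not SharePoint's actual code) of why an unsynchronised shared Dictionary produces exactly these two symptoms: racing writers can corrupt the internal bucket chains so that FindEntry loops forever at 100% CPU, and a non-atomic ContainsKey/Add race throws "An item with the same key has already been added." The fix on .NET 4 is a ConcurrentDictionary (or a lock around every access):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class DictionaryRaceDemo
{
    // A shared cache like the static one inside ToEnumerableProcessor.
    // With a plain static Dictionary this pattern is unsafe under load;
    // ConcurrentDictionary makes it safe without explicit locking.
    static readonly ConcurrentDictionary<string, int> Cache =
        new ConcurrentDictionary<string, int>();

    static void Main()
    {
        Parallel.For(0, 10000, i =>
        {
            // GetOrAdd is atomic per key, so two threads racing on the
            // same key can neither corrupt the structure nor throw the
            // duplicate-key exception that a ContainsKey/Add pair can.
            Cache.GetOrAdd("key" + (i % 100), k => k.Length);
        });
        Console.WriteLine(Cache.Count); // 100 distinct keys
    }
}
```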
Kerberos Pt 2
Simon's recent issue with Kerberos reminded me of an issue I faced recently where Kerberos was failing. This is possibly a question for Ken, but anyone else might want to chip in. I often refer people to Ken's multi-part blog on Kerberos. It must have been written when Ken had some spare time, before he started sparring with Silky. I digress.

Imagine we have DEVSERVER with SSRS 2008 R2 and SharePoint 2010 installed. I believe:

- SSRS was installed and configured to use service account domain\svcSSRS and listen on port 80, and
- SP2010 was installed and configured to use service account domain\svcSP2010 and listen on port .

Initially, the domain controllers were complaining about a duplicate SPN because HTTP/devserver was registered against both of the above service accounts. This may have been because the guys were mucking around with SPNs trying to make things work. So, to fix that, I removed the SPN HTTP/devserver from domain\svcSP2010 and added the SPN as HTTP/devserver:. No more complaints about duplicate SPNs. Still didn't work, though.

Introduce another server. Let's call it PITA - she, sadly, runs BizTalk 2010. I sniffed the traffic with Wireshark, which showed that any process running on PITA still requested Kerberos tickets for HTTP/devserver, no matter whether the ultimate request was for http://devserver:80 or http://devserver:. In fact, I found that most (all?) requests do not add the port number. So SPNs support port numbers, but clients don't request tickets with a port number?

My suggestion was to create DNS A records for the two servers and add the respective SPN to each service account (I already knew one cannot use a CNAME, as the underlying host name will be used anyway). Have I not read something in the docs, or is this a general gotcha that one should be aware of?

--
*Richard Carde*
E: rich...@carde.id.au
RE: Kerberos Pt 2
Richard,

Typically, if you are running SSRS and SP2010 on the same box, they need to run under the same service account, for that very reason: two accounts can't register the same SPN. Also, SharePoint creates sites on port 80; the site you might have configured could be the Central Administration port. Have you enabled delegation on these service accounts in Active Directory? You also need an FQDN SPN entry for each URL too. And yes, you can't use a CNAME DNS entry for Kerberos; it must be an A record.

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Richard Carde
Sent: Wednesday, 23 November 2011 7:47 AM
To: ozdotnet
Subject: Kerberos Pt 2

[quoted text snipped - see Richard's original post above]
Re: [Friday OT] unstoppable force meets an immovable object,
See, I'm not buying that :) Risk matrix - Consequences vs Likelihood.

Questions: why are developers working with production-grade data (customer info etc.)? Shouldn't that be partitioned off into a more secure, locked-down release area only? Developers working with "Foo Jones" is, imho, the counter-pill to the aforementioned claim. Placing the developer pool in their own DMZ sandbox is also, imho, the way forward, so if they are compromised it's contained, and all data should be test data that doesn't include sensitive information.

IP getting stolen? There's a million ways to bypass a locked-down machine to get data in/out. If someone were to expose the code base or documents, firstly it's likely they are moving data outside the confines of the said PC, and secondly they are likely to screw up no matter how much sys admin nannying is in place.

In all honesty, I think sys admins today really need to rein in their approach to building the zen-like, perfectly secure network. Devs need more room to play in, so provide them with a sandbox and look instead into ways of emulating the network solutions they build for, rather than just declaring SOE war. Having spent a few tours in GOVT, it's like the sys admins are still reading their "How to prevent virus attacks on Windows NT 4.0" playbooks. Didn't Suncorp recently adopt the bring-your-own-PC-to-work philosophy?

---
Regards,
Scott Barnes
http://www.riagenic.com

On Sat, Nov 19, 2011 at 12:06 AM, Ken Schaefer k...@adopenstatic.com wrote:

On the other hand, you just head over to the sysadmin lists and see the admins complaining about how much time is consumed supporting developers who get their machines compromised or otherwise borked. Putting unauthorised networks into an environment is a huge no-no in my book. Most developers do not have the skills or the knowledge to secure a network, let alone know what regulatory/audit requirements the business has.
Then, if there is a compromise and corporate IP is stolen, customer information stolen, etc. due to ingress via an unauthorised network, who is going to take the rap?

-----Original Message-----
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Arjang Assadi
Sent: Friday, 18 November 2011 5:00 PM
To: ozDotNet
Subject: Re: [Friday OT] unstoppable force meets an immovable object,

On 18 November 2011 19:47, Les Hughes l...@datarev.com.au wrote:

Get a rogue box on the network with VMware and set up a shadow network. A wireless router can also help if the wired network is a little discriminatory. Fight the power!

Brilliant! That's the voice of a true programmer being an unstoppable force talking.
Re: [Friday OT] unstoppable force meets an immovable object,
On Wed, Nov 23, 2011 at 11:24 AM, Scott Barnes scott.bar...@gmail.com wrote:

See, i'm not buying that :) Risk matrix - Consequences vs Likelihood. Questions - Why are developers working with production grade data (customers info etc).

Because only production data is a large enough set to do certain kinds of testing (speed, queries that return extremely large recordsets, etc.).

Shouldn't that be partitioned off into a more secure locked down release area only. Developers working with Foo Jones is imho the counter pill to the for mentioned claim. Placing the developer pool in their own DMZ sandbox imho is also the way forward, so if they are compromised its contained and all data etc should be test data that doesn't include sensitive information.

Better, but still not good if you want to be testing many, many connections/users against a large DB.

IP getting stolen? Theres a million ways to bypass a locked down machine to get the data in/out ..if someone were to expose the code base or documents it first is likely they are moving data outside the confines of the said PC and secondly are likely to screw up no matter how much Sys Admin nannying is in place.

And testing for the overhead caused by the nannying is useful, too.

--
Meski
http://courteous.ly/aAOZcv
"Going to Starbucks for coffee is like going to prison for sex. Sure, you'll get it, but it's going to be rough" - Adam Hills
RE: Client /Server alternative for TCP
I would go with RCF; most of the complexities are abstracted away, so it's pretty easy, and it's free. http://www.deltavsoft.com/

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Jano Petras
Sent: Tuesday, 22 November 2011 6:53 PM
To: ozDotNet
Subject: Re: Client /Server alternative for TCP

Well, hosting your application under IIS would get rid of worries regarding listeners and connections, if your system can follow a regular REST request/response pattern (that is, you don't need to keep the connection open). WCF is an option here as it supports REST, or just plain ASP.NET or MVC with a good REST library if you don't need all the bells and whistles WCF provides.

My 2 cents,
jano

On 18 November 2011 07:05, Anthony Mayan ifum...@gmail.com wrote:

Currently looking at building a client-server app, but wondering whether we should use TCP. Could we use a REST service instead, or another method? I want to avoid creating a multi-threaded TCP listener to handle all the connections... anyone have any suggestions?

Anthony
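To make the "plain REST, no hand-rolled TCP listener" option concrete, here is a minimal sketch using HttpListener, which handles the sockets and threading that a raw TcpListener would otherwise force you to manage. The port and route are arbitrary for the demo, and on a locked-down box you may need a URL ACL to listen without admin rights:

```csharp
using System;
using System.Net;
using System.Text;

// A minimal request/response service: each GET to /api/ returns a JSON body.
class MiniRestServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:8080/api/");
        listener.Start();
        Console.WriteLine("Listening on http://localhost:8080/api/ ...");
        while (true)
        {
            // GetContext blocks until a request arrives; HttpListener
            // queues and dispatches connections for us.
            HttpListenerContext ctx = listener.GetContext();
            byte[] body = Encoding.UTF8.GetBytes("{\"status\":\"ok\"}");
            ctx.Response.ContentType = "application/json";
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```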
EF4 metadata loading
Folks, this is just a heads-up about a gotcha with Entity Framework 4. Both of my EF4 books warn that if you want to use the MetadataWorkspace class to inspect the model, you cannot guarantee that the metadata has been loaded, and you might get an exception. Their workaround is to make a call like context.MyTable.ToTraceString() before you start, optionally wrapping it in a TryXXX style guard if you want to be fancy.

I haven't seen any warnings of a similar problem when you run an ESQL query. Yesterday I had a simple query like the following throwing and telling me that 'CustomerID' is not a member of 'Customer'. I thought I had a missing namespace or a syntax error or whatever, and after terrible suffering I realised that there was actually nothing wrong with the query - adding the marked line made it work. The error message was completely misleading.

using (var context = new TestContext())
{
    context.Customer.ToTraceString(); // <-- THE FIX
    var query = context.CreateQuery<DbDataRecord>(
        "SELECT c.CustomerID, c.Name FROM Customer AS c");
    foreach (var rec in query)
    {
        Trace("{0} {1}", rec.GetInt32(0), rec.GetString(1));
    }
}

This is a shocking irritation. The equivalent LINQ query works fine, but the ESQL fails. This means I have to put guard code before every ESQL query I run (or try to prefer LINQ where possible).

Greg
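Since the guard is apparently needed before every ESQL query, it could at least be centralised. This is a hypothetical helper (the name and shape are mine, not from EF), applying the same ToTraceString trick the books suggest:

```csharp
using System;
using System.Data.Objects;

// Hypothetical guard for the EF4 metadata-loading gotcha: translating any
// entity-set query to SQL forces the model metadata into the
// MetadataWorkspace so a subsequent CreateQuery<T> can resolve members.
public static class MetadataGuard
{
    public static void EnsureLoaded<T>(ObjectQuery<T> anyEntitySet)
    {
        try
        {
            anyEntitySet.ToTraceString();
        }
        catch (Exception)
        {
            // If this probe fails, let the real query surface the problem;
            // the guard is only here for its metadata-loading side effect.
        }
    }
}
```

Usage before the ESQL query from the post would then be a one-liner: MetadataGuard.EnsureLoaded(context.Customer);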
RE: Client /Server alternative for TCP
> Want to avoid creating a multi-thread tcp listener to handle all the connections... anyone have any suggestions?

Assuming you want duplex communication... if you plan to run inside the LAN, where Remoting can open callback channels, then Remoting is old but reliable. A singleton server handles incoming requests on different threads automatically. I've used this technique in a few apps and it's easy to code (once you get over the initial hump). It's not scalable outside the LAN, of course, and to overcome that we had to buy Genuine Channels so that duplex communication happens on a single port.

These days I would use WCF over TCP with callbacks. If you just want a dumb one-way service then I'd use basicHttpBinding WCF. If you want something dumb and open then I'd use ASMX or REST.

Greg
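To flesh out the "WCF over TCP with callbacks" option, here is a minimal duplex contract sketch. All names are illustrative; you would host the service on a netTcpBinding endpoint, which keeps a single duplex channel open - the job Genuine Channels did for Remoting:

```csharp
using System.ServiceModel;

// Contract the server uses to call back into the client.
public interface IClientCallback
{
    [OperationContract(IsOneWay = true)]
    void Notify(string message);
}

// Service contract, declaring which callback contract clients must supply.
[ServiceContract(CallbackContract = typeof(IClientCallback))]
public interface IDuplexService
{
    [OperationContract]
    void Subscribe();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class DuplexService : IDuplexService
{
    public void Subscribe()
    {
        // Capture the caller's callback channel so the server can push
        // messages to it later, over the same TCP connection.
        IClientCallback client =
            OperationContext.Current.GetCallbackChannel<IClientCallback>();
        client.Notify("subscribed");
    }
}
```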