RE: Cosmos DB

2018-02-06 Thread Paul Glavich
>> * Was your client on your desktop or up in the cloud in the same region as 
>> the database?

All in Azure, same region.

 

>> * Are you using a different Guid as the partition key for each document? 
>> You're supposed to be more "coarse" and group large groups of documents into 
>> partitions to improve large query performance. I'm not sure how this would 
>> affect your bulk insert tests.

Yes, a different Guid per document. We spoke with the CosmosDb team and they were all for Guids as 
the partition key in a lot of cases; we spent a lot of time talking with 
Microsoft about this within the bounds of the project. Guids distribute really well 
and make partitioning really effective. Yes, there is a grouping 
effect, but there is a massive risk of grouping too much, which concentrates 
RU usage on a single partition. With 20,000 RUs for a collection, if you have 
10 partitions, the maximum you can get per partition is 2,000 RUs, so if you 
group a lot of documents together on a single partition, you get really small 
throughput because you are limited to that partition's share. As a general rule of 
thumb, the broader the key range, the better the overall throughput. Enabling cross 
partition queries satisfies most requests with good performance.
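To illustrate with the .NET DocumentDB SDK of the time (just a sketch; the endpoint, key, database/collection names, `pk` property and document shape are all made up):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;

public static class PartitionSketch
{
    public static async Task RunAsync()
    {
        // Illustrative endpoint/key only.
        var client = new DocumentClient(
            new Uri("https://myaccount.documents.azure.com"), "<auth-key>");
        var collectionUri = UriFactory.CreateDocumentCollectionUri("mydb", "orders");

        // A fresh Guid per document as the partition key value spreads
        // writes evenly across physical partitions.
        await client.CreateDocumentAsync(collectionUri,
            new { id = Guid.NewGuid().ToString(), pk = Guid.NewGuid().ToString(), total = 42 });

        // Queries spanning partition key values must opt in explicitly.
        var feedOptions = new FeedOptions
        {
            EnableCrossPartitionQuery = true,
            MaxDegreeOfParallelism = -1 // let the SDK fan out in parallel
        };

        var query = client.CreateDocumentQuery<dynamic>(collectionUri,
            "SELECT * FROM c WHERE c.total > 10", feedOptions).AsDocumentQuery();

        while (query.HasMoreResults)
        {
            foreach (var doc in await query.ExecuteNextAsync<dynamic>())
                Console.WriteLine(doc);
        }
    }
}
```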

 

In this instance, I didn’t want to limit by partition in any way and simply 
wanted to impose the same distribution offsets for every test so that it was at 
least consistent.

 

*   Glav

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 7 February 2018 12:25 PM
To: ozDotNet 
Subject: Re: Cosmos DB

 

Only just saw this. I have worked with and designed a solution using CosmosDb 
which is currently in production. I actually just released a blog post on 
client performance related to CosmosDb here: 
https://weblogs.asp.net/pglavich/cosmosdb-and-client-performance

 

I just read it all, well done. It does concentrate on bulk insert speed, which 
may not be a concern for many products. It's interesting to see which "walls" 
you hit along the way. Some notes:

 

* Was your client on your desktop or up in the cloud in the same region as the 
database?

 

* Are you using a different Guid as the partition key for each document? You're 
supposed to be more "coarse" and group large groups of documents into 
partitions to improve large query performance. I'm not sure how this would 
affect your bulk insert tests.

 

* (nitpicking) When discussing performance, don't say "significant impact" or 
"greater impact", use unambiguous expressions like "improve" or "degrade".

 

As to your question, I haven’t used it personally, but I believe the Table API 
over CosmosDb supports bulk operations ( 
https://docs.microsoft.com/en-us/azure/cosmos-db/table-support ) as it is the 
same as the general Windows Azure Table storage API, which supports batch operations.

 

I'm not using Tables as the underlying storage, so I can't use the batch 
operations. For Cosmos DB SQL API the only mention of batch or transaction 
operations is related to JavaScript procs, but I haven't researched this yet. 
If you have to register server-side JavaScript for this purpose, then I'm quite 
irritated, not just because of my JS bias, but because it's a weird language 
mix.
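For the record, the SQL API's transactional batching does go through server-side JS: you register a stored procedure and invoke it from .NET. A sketch of the client side (the "bulkInsert" sproc name and its behaviour are hypothetical):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

public static class SprocBatchSketch
{
    // "bulkInsert" is a hypothetical server-side JS sproc that loops over
    // its docs argument calling collection.createDocument(...).
    public static async Task RunAsync(DocumentClient client)
    {
        var sprocUri = UriFactory.CreateStoredProcedureUri("mydb", "orders", "bulkInsert");

        var docs = new[]
        {
            new { id = Guid.NewGuid().ToString(), pk = "batch-1", total = 1 },
            new { id = Guid.NewGuid().ToString(), pk = "batch-1", total = 2 },
        };

        // The sproc executes transactionally, but only within the single
        // partition targeted here; all docs must share that partition key.
        var result = await client.ExecuteStoredProcedureAsync<int>(
            sprocUri,
            new RequestOptions { PartitionKey = new PartitionKey("batch-1") },
            (object)docs);

        Console.WriteLine($"Inserted {result.Response} documents");
    }
}
```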

 

I also have a (long) blog article on how moving an app suite from SQL Server to 
Cosmos DB produced miraculous results:

 

https://gfkeogh.blogspot.com.au/2018/01/collections-database-history.html

 

Greg

 



RE: Cosmos DB

2018-02-06 Thread Paul Glavich
Sorry for the delay,

 

Only just saw this. I have worked with and designed a solution using CosmosDb 
which is currently in production. I actually just released a blog post on 
client performance related to CosmosDb here: 
https://weblogs.asp.net/pglavich/cosmosdb-and-client-performance

 

As to your question, I haven’t used it personally, but I believe the Table API 
over CosmosDb supports bulk operations ( 
https://docs.microsoft.com/en-us/azure/cosmos-db/table-support ) as it is the 
same as the general Windows Azure Table storage API, which supports batch operations.
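If the Table API route suits your data, the classic WindowsAzure.Storage batch looks roughly like this (a sketch; the entity type, table name and connection string are invented; batches are capped at 100 operations on one partition key):

```csharp
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class ItemEntity : TableEntity
{
    public ItemEntity() { }
    public ItemEntity(string pk, string rk) : base(pk, rk) { }
    public int Total { get; set; }
}

public static class TableBatchSketch
{
    public static async Task RunAsync(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("items");
        await table.CreateIfNotExistsAsync();

        // One batch = up to 100 operations, all on the same PartitionKey,
        // committed as a single entity group transaction.
        var batch = new TableBatchOperation();
        for (int i = 0; i < 100; i++)
            batch.Insert(new ItemEntity("partition-1", i.ToString()) { Total = i });

        await table.ExecuteBatchAsync(batch);
    }
}
```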

 

*   Glav

 

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Friday, 12 January 2018 12:06 PM
To: ozDotNet 
Subject: Cosmos DB

 

A quick Friday query ... Is anyone here using Cosmos DB in anger? I only ask 
because I just returned to experiment with it for the first time since it 
changed name to Cosmos. I actually quite like it! The .NET SDK wraps reasonably 
sensible classes over the API and I had a demo program up and running quickly.

 

I notice there are no "batch" operations for bulk inserts like we have in Table 
Storage, and I'm not writing JavaScript procs. It was unclear how to mix 
different document types in the same collection and process them separately 
(using a bool flag property in each type is the way to go). There is little 
guidance about how to use the unique string Id. I haven't figured out how to 
add attachment links yet. But despite some oddities about the way it works, I'm 
really impressed with the performance, the magical indexing and ease-of-use.

 

I'm thinking of moving my "music database" out of SQL Server into Cosmos and 
writing a fresh UI. This is my hobby project that has been going since 1992 and 
has never been completed because a new technology or fad comes along annually 
and forces an Xmas holiday rewrite. I realise now that cataloguing items like 
music, videos and books has always produced rather strict and complicated 
normalised SQL database tables. These sorts of items feel much more natural 
when converted to de-normalised documents.

 

Greg K



RE: Used Azure SQL DB? Why or why not?

2017-02-09 Thread Paul Glavich
Hey Greg,

 

In the scenarios re: multiple DBs to a single DB with schemas, it was mostly where one 
application was using one database for logs, one for main data, and another for 
something else; not really multi-tenanted. Also, it certainly isn’t a big 
issue, but in a lot of dev shops where multiple databases live, there is an 
assumption that you can do just the same in Azure, and you sort of can, but 
there are limitations. For example, if you have DB migration scripts (or a system 
in place) that ‘use [databasename]’, this clearly won’t work in Azure. 
You then have to have separate connections to each database (if you continue 
to use this convention) and manage it differently; just a consideration.

 

Also, by saying ‘performance aside’ I was merely highlighting that multi-DB vs 
single-DB-multi-schema is something to think about, without introducing a 
lengthier performance discussion. However, I would like to know more about the 
pitfalls (from a perf perspective) of multiple schemas.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 1 February 2017 11:01 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Used Azure SQL DB? Why or why not?

 

Hi Glav,

 

One caught my eye there. Can’t admit to liking the use of schemas for tenants. I live 
in a world where “performance aside” isn’t an aside. That has way too much 
impact on query plans, caching, memory, etc. for my liking.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Saturday, 28 January 2017 3:58 PM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Used Azure SQL DB? Why or why not?

 

Hey Greg,

 

Use it all the time, and am working with a customer which is a greenfield 
project.

 

Things to note:

* Getting a good idea of performance relative to the size/number of 
DTUs. Initially, it is a pretty rough guess at the best of times. Also, assuming 
all the queries written against it are good (which often is not the case) makes 
it harder to estimate properly. Over time, and with adequate testing, this 
becomes less of an issue though.

* Retry with exponential back-off. EF has a strategy to do this 
BUT it doesn’t support transactions. Want to use a transaction? Then disable the 
retry/back-off policy and do your own. You can use something like Polly to do 
this too, but it is an extra dependency.

* Syncing data between Azure SQL and an on-premises SQL Server. There are 
options, but I think Azure SQL Data Sync is mostly it. If it doesn’t work well 
with that, well, make it up from there.

* The customer initially started using a central SQL dev DB. Caused all 
sorts of pain. I created a set of migration scripts so that the DB can be run 
locally, with migration scripts for SQL in Azure.

* Migrating the thought process from multiple databases to a single DB with 
multiple schemas. Not that you can’t use multiple databases, but it is mostly 
easier (especially for migration scripts) to operate on one DB (performance 
aside).
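On the retry point above: the documented EF6 workaround for mixing your own transactions with SqlAzureExecutionStrategy is to make the strategy suspendable (a sketch based on the pattern in the EF docs; class names are illustrative):

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.SqlServer;
using System.Runtime.Remoting.Messaging;

public class MyDbConfiguration : DbConfiguration
{
    public MyDbConfiguration()
    {
        // Use the retrying strategy by default, but allow callers to
        // suspend it when they need an explicit transaction.
        SetExecutionStrategy("System.Data.SqlClient",
            () => SuspendExecutionStrategy
                ? (IDbExecutionStrategy)new DefaultExecutionStrategy()
                : new SqlAzureExecutionStrategy());
    }

    // Flows with the logical call context so async code stays correct.
    public static bool SuspendExecutionStrategy
    {
        get { return (bool?)CallContext.LogicalGetData("SuspendExecutionStrategy") ?? false; }
        set { CallContext.LogicalSetData("SuspendExecutionStrategy", value); }
    }
}

// Usage: turn retries off around a user-initiated transaction, then
// you own the retry responsibility for that block.
// MyDbConfiguration.SuspendExecutionStrategy = true;
// using (var tx = context.Database.BeginTransaction()) { /* ... */ tx.Commit(); }
// MyDbConfiguration.SuspendExecutionStrategy = false;
```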

 

Probably a few others, but that is a brain dump for now.

Also, I will be seeing you at Ignite as I got asked to do a preso only recently.

 

See you there :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Saturday, 28 January 2017 1:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Used Azure SQL DB? Why or why not?

 

To my developer buddies: I'm preparing a session for Ignite where I'm 
discussing using Azure SQL DB for greenfield (new) applications. Would love to 
hear opinions: if you've used it, what you found/learned, and if you 
haven't used it, what stopped you?

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 



RE: WebApi all 404

2017-01-17 Thread Paul Glavich
At a guess I’d say it is routing and perhaps virtual directories.

 

Things to try:

* Write a delegating handler that just returns a hardcoded response and 
see if it works. If yes, you know the request is getting through and it's probably 
routing.

* Change your routing entries :) 

* Add some logging in a global action filter – log lots of shit to see 
what it is trying to route.
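The first suggestion, sketched out for classic ASP.NET Web API (the handler name is invented):

```csharp
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Short-circuits every request with a hardcoded 200 so you can tell
// whether requests are reaching Web API at all, before routing runs.
public class ProbeHandler : DelegatingHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent("probe reached: " + request.RequestUri)
        };
        return Task.FromResult(response);
    }
}

// Register it in WebApiConfig.Register:
// config.MessageHandlers.Add(new ProbeHandler());
```

If the probe still 404s, the request never reaches Web API and the problem sits in IIS/handler mappings rather than your routes.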

 

Really just guessing though as I haven’t seen this behaviour.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Tuesday, 17 January 2017 1:31 PM
To: ozDotNet 
Subject: WebApi all 404

 

Folks, I just installed a WebApi app on a GoDaddy server and every request 
produces 404. The same app on nearly identical servers all work okay. I've been 
searching and fiddling for over 2 hours now with no progress at all. Every 
sensible suggestion I've found in web searches is useless ... pool settings, 
IIS verbs, framework versions, regiis, etc. A static htm file in the app root 
is visible, so it's something about the app, not the dns, permissions, site, 
binding, etc.

 

Has anyone experienced this and can remember what the trick is?

 

What sh*ts me is that I'm sure this happened to me early last year, and it took 
me hours to get it working, but I can't remember what I did. This time I'll 
make a blog post.

 

Greg K



RE: [OT] node.js and express

2016-11-29 Thread Paul Glavich
Well as I said, context.

 

What you have listed are options. Assess, then make a call.

 

However, my dependency chain is calling. Need to make it longer…. :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Wednesday, 30 November 2016 9:26 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

ES6

Typescript
AngularJS 1.5
Angular2
Aurelia
Polymer
YARN
NPM
react
grunt
gulp

 

This list extracted from Glav's message should be a hint that JS is on the 
fritz, and it's the tip of the iceberg -- Greg K

 

On 30 November 2016 at 09:13, Paul Glavich <subscripti...@theglavs.com> wrote:

It depends :)

 

However in an attempt to answer, which usually requires a lot more context and 
thought, here we go:

 

* I’d choose ES6/TypeScript at a minimum. ES6 imports/modules/classes 
are pretty handy and good for separating out your logic. TypeScript is also good 
for structure/”pretend static-ness” and helps with catching errors, but is not 
to everyone's taste.

* I would then consider the following:

o   AngularJS 1.5

*  I’d consider this because it is well known (not bleeding edge) and can be 
easily packaged with no dependencies, thus reducing risk.

o   After consultation with whatever team is working on and assessment of their 
JS skillset, I would also consider:

*  Angular2/Aurelia/Polymer (I personally like Aurelia but that is purely 
personal)

* This would depend on the team's skill level and the application 
requirements. A proficient JavaScript team can overcome any JS limitation or 
issue, irrespective of framework.

* In addition, I’d look at using something like YARN (vs NPM) as a 
package manager to reduce issues with package inconsistency.

*  Note: I didn’t mention React simply because I don’t like it. No technical 
reason – it just is fugly :). In addition, it went from v0.14.8 to v15.0.0 in 
one release. Not sure what versioning world it came from, but that is not 
sequential, semantic, or anything in between.

* As a caveat to this, in a current engagement I recommended the team 
use Angular 1.5 because:

o   The team was familiar with it.

o   Team had no idea about grunt/gulp/ES6/Angular2 etc.

o   Had tight timeframes with no leeway to ramp up time.

o   I am only on the engagement 2 days a week and cannot effectively ramp it up 
in addition to working on CI/CD, architecture, team process, etc.

o   So far, this is working very well and has been a good decision.

 

Hope that clarifies my thinking somewhat.

 

-  Glav 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Adrian Halid
Sent: Monday, 28 November 2016 10:17 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

 

If you were to start a new Enterprise Web Project which has the potential to be 
continually developed and enhanced over 5 to 10 years what web technology 
frameworks would you choose?

 

Regards

 

Adrian Halid 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Monday, 28 November 2016 6:18 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

 

Yeah pretty much :)

 

I am a web guy through and through and I don’t mean to hack on people 
specifically, but as an industry it still manifests as real immaturity. I also 
don’t mean to suggest we don’t use and play with the new stuff either but the 
level of acceptance, particularly from people way smarter than me, is puzzling. 
It is real easy to be critical (like I have here…. ) so providing feedback on 
progress is pretty important. Doesn’t always work as momentum can carry it 
through (I am looking at you Angular2).

 

My rather rambling and opinionated point is that on a few engagements, I have 
recommended to not use the shiny new stuff, in favour of older but well known, 
and easier to maintain frameworks (after assessment of timeframes, people’s 
skillsets etc). Not using the latest in those cases has proven to be a boon, 
rather than an impediment. Sure it is lower on the coolness scale, and could be 
replaced with newer stuff later (obviously with a little rework) but it is 
working very well. It kind of suggests we perhaps invented part of the problem 
to solve in the first place. So I think play and assess the new stuff (like 
Greg has), but don’t blindly accept. Then engage and provide some form of 
influence or feedback so we don’t re-introduce the same mess in another 10 
years.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com <mailto:ozdotnet-boun...@o

RE: [OT] node.js and express

2016-11-29 Thread Paul Glavich
It depends :)

 

However in an attempt to answer, which usually requires a lot more context and 
thought, here we go:

 

* I’d choose ES6/TypeScript at a minimum. ES6 imports/modules/classes 
are pretty handy and good for separating out your logic. TypeScript is also good 
for structure/”pretend static-ness” and helps with catching errors, but is not 
to everyone's taste.

* I would then consider the following:

o   AngularJS 1.5

*  I’d consider this because it is well known (not bleeding edge) and can be 
easily packaged with no dependencies, thus reducing risk.

o   After consultation with whatever team is working on and assessment of their 
JS skillset, I would also consider:

*  Angular2/Aurelia/Polymer (I personally like Aurelia but that is purely 
personal)

* This would depend on the team's skill level and the application 
requirements. A proficient JavaScript team can overcome any JS limitation or 
issue, irrespective of framework.

* In addition, I’d look at using something like YARN (vs NPM) as a 
package manager to reduce issues with package inconsistency.

*  Note: I didn’t mention React simply because I don’t like it. No technical 
reason – it just is fugly :). In addition, it went from v0.14.8 to v15.0.0 in 
one release. Not sure what versioning world it came from, but that is not 
sequential, semantic, or anything in between.

* As a caveat to this, in a current engagement I recommended the team 
use Angular 1.5 because:

o   The team was familiar with it.

o   Team had no idea about grunt/gulp/ES6/Angular2 etc.

o   Had tight timeframes with no leeway to ramp up time.

o   I am only on the engagement 2 days a week and cannot effectively ramp it up 
in addition to working on CI/CD, architecture, team process, etc.

o   So far, this is working very well and has been a good decision.

 

Hope that clarifies my thinking somewhat.

 

-  Glav 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Adrian Halid
Sent: Monday, 28 November 2016 10:17 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

 

If you were to start a new Enterprise Web Project which has the potential to be 
continually developed and enhanced over 5 to 10 years what web technology 
frameworks would you choose?

 

Regards

 

Adrian Halid 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Monday, 28 November 2016 6:18 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

 

Yeah pretty much :)

 

I am a web guy through and through and I don’t mean to hack on people 
specifically, but as an industry it still manifests as real immaturity. I also 
don’t mean to suggest we don’t use and play with the new stuff either but the 
level of acceptance, particularly from people way smarter than me, is puzzling. 
It is real easy to be critical (like I have here…. ) so providing feedback on 
progress is pretty important. Doesn’t always work as momentum can carry it 
through (I am looking at you Angular2).

 

My rather rambling and opinionated point is that on a few engagements, I have 
recommended to not use the shiny new stuff, in favour of older but well known, 
and easier to maintain frameworks (after assessment of timeframes, people’s 
skillsets etc). Not using the latest in those cases has proven to be a boon, 
rather than an impediment. Sure it is lower on the coolness scale, and could be 
replaced with newer stuff later (obviously with a little rework) but it is 
working very well. It kind of suggests we perhaps invented part of the problem 
to solve in the first place. So I think play and assess the new stuff (like 
Greg has), but don’t blindly accept. Then engage and provide some form of 
influence or feedback so we don’t re-introduce the same mess in another 10 
years.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Scott Barnes
Sent: Sunday, 27 November 2016 9:40 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

So Paul, you're basically saying "The standard we walk past, is the standard we 
accept" hehe :)




---
Regards,
Scott Barnes
http://www.riagenic.com

 

On Sun, Nov 27, 2016 at 8:31 PM, Paul Glavich <subscripti...@theglavs.com> wrote:

Ahh the recurring thread about how immature the JS/Web dev community is and how 
hard it is to do anything “right”.

 

All I will say is we asked for it. If we didn’t ask for it, we accepted it. If 
we didn’t accept it, we assumed that the new was good and ran with it.

 

We make a whole lot of assumptions on server side tech and place a whole d

RE: [OT] node.js and express

2016-11-27 Thread Paul Glavich
Yeah pretty much :)

 

I am a web guy through and through and I don’t mean to hack on people 
specifically, but as an industry it still manifests as real immaturity. I also 
don’t mean to suggest we don’t use and play with the new stuff either but the 
level of acceptance, particularly from people way smarter than me, is puzzling. 
It is real easy to be critical (like I have here…. ) so providing feedback on 
progress is pretty important. Doesn’t always work as momentum can carry it 
through (I am looking at you Angular2).

 

My rather rambling and opinionated point is that on a few engagements, I have 
recommended to not use the shiny new stuff, in favour of older but well known, 
and easier to maintain frameworks (after assessment of timeframes, people’s 
skillsets etc). Not using the latest in those cases has proven to be a boon, 
rather than an impediment. Sure it is lower on the coolness scale, and could be 
replaced with newer stuff later (obviously with a little rework) but it is 
working very well. It kind of suggests we perhaps invented part of the problem 
to solve in the first place. So I think play and assess the new stuff (like 
Greg has), but don’t blindly accept. Then engage and provide some form of 
influence or feedback so we don’t re-introduce the same mess in another 10 
years.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Scott Barnes
Sent: Sunday, 27 November 2016 9:40 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

So Paul, you're basically saying "The standard we walk past, is the standard we 
accept" hehe :)




---
Regards,
Scott Barnes
http://www.riagenic.com

 

On Sun, Nov 27, 2016 at 8:31 PM, Paul Glavich <subscripti...@theglavs.com> wrote:

Ahh the recurring thread about how immature the JS/Web dev community is and how 
hard it is to do anything “right”.

 

All I will say is we asked for it. If we didn’t ask for it, we accepted it. If 
we didn’t accept it, we assumed that the new was good and ran with it.

 

We make a whole lot of assumptions on server side tech and place a whole deal 
of constraints and measures on it.

 

Not so on client dev. Massive external dependencies are abhorrent on server 
side. On client side, they are celebrated (to cite an example).

 

We built it and promoted it. It is on us, not the vendors.

 

I *think* Greg Keogh started this with some investigations on how hard it 
was to implement something using framework/technique X. Cool. You have learnt 
what not to do, not how to do something with the latest tech just because Scott 
Hanselman mentioned it.

 

Caveat: I am an old bastard. This argument is not new, but it is compounded by 
an increase in velocity in general.

 

-  Glav

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Nathan Schultz
Sent: Friday, 25 November 2016 4:13 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

@Greg, in the last version of LightSwitch you could choose either HTML5 or 
Silverlight on the client. But you're right... it's no longer an option.

 

On 25 November 2016 at 11:25, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:

Yep, Lightswitch is dead. It was Silverlight based.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Nathan Schultz
Sent: Friday, 25 November 2016 2:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

Arguably, a productive web-based RAD tool is exactly the sort of niche that 
Microsoft LightSwitch was trying to fill (although I'm pretty certain it's now 
dead). As I said earlier, we use OutSystems here, and I believe it's an area 
that Aurelia.IO and other vendors are growing into as well.

 

On 25 November 2016 at 11:00, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:

But that's exactly the point Scott. Why have we gone so far backwards in 
productivity?

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
SQ

RE: [OT] node.js and express

2016-11-27 Thread Paul Glavich
Ahh the recurring thread about how immature the JS/Web dev community is and how 
hard it is to do anything “right”.

 

All I will say is we asked for it. If we didn’t ask for it, we accepted it. If 
we didn’t accept it, we assumed that the new was good and ran with it.

 

We make a whole lot of assumptions on server side tech and place a whole deal 
of constraints and measures on it.

 

Not so on client dev. Massive external dependencies are abhorrent on server 
side. On client side, they are celebrated (to cite an example).

 

We built it and promoted it. It is on us, not the vendors.

 

I *think* Greg Keogh started this with some investigations on how hard it 
was to implement something using framework/technique X. Cool. You have learnt 
what not to do, not how to do something with the latest tech just because Scott 
Hanselman mentioned it.

 

Caveat: I am an old bastard. This argument is not new, but it is compounded by 
an increase in velocity in general.

 

-  Glav

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Friday, 25 November 2016 4:13 PM
To: ozDotNet 
Subject: Re: [OT] node.js and express

 

@Greg, in the last version of LightSwitch you could choose either HTML5 or 
Silverlight on the client. But you're right... it's no longer an option.

 

On 25 November 2016 at 11:25, Greg Low (罗格雷格博士) wrote:

Yep, Lightswitch is dead. It was Silverlight based.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Nathan Schultz
Sent: Friday, 25 November 2016 2:20 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] node.js and express

 

Arguably, a productive web-based RAD tool is exactly the sort of niche that 
Microsoft LightSwitch was trying to fill (although I'm pretty certain it's now 
dead). As I said earlier, we use OutSystems here, and I believe it's an area 
that Aurelia.IO and other vendors are growing into as well.

 

On 25 November 2016 at 11:00, Greg Low (罗格雷格博士)  > wrote:

But that's exactly the point Scott. Why have we gone so far backwards in 
productivity?

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com

 


From: ozdotnet-boun...@ozdotnet.com on behalf of Scott Barnes
Sent: Friday, November 25, 2016 12:09:38 PM
To: ozDotNet


Subject: Re: [OT] node.js and express

 

"It Depends" on what tool you're looking at. If all you're doing is staring at 
Visual Studio and wondering why the world is so hard to develop for, then 
that's not a realistic outcome; despite all the OSS rhetoric, Microsoft is 
still preoccupied with a Windows-first-class-citizen approach to roadmaps. 
They'll dip their toes in other platforms, but until revenue models change, 
tool -> Windows. The rest will just be an additive byproduct / bonus rounds 
outside that.

 

Products like Unity3D and Xamarin were the answer to that question but not as 
"drag-n-drop tab dot ship" as Winforms of old.. those days are well behind us 
now.

 

 

 

 

 




---
Regards,
Scott Barnes
http://www.riagenic.com

 

On Fri, Nov 25, 2016 at 9:54 AM, Greg Low (罗格雷格博士) wrote:

So it then comes back to tooling again.

 

Why can’t I build an app with the ease of a WinForms app and have it deployed in 
the current environments? Surely the app framework should fix the underlying 
mess and let me code to a uniform, clean model.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Ken Schaefer
Sent: Thursday, 24 November 2016 9:41 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: [OT] node.js and express

 


RE: [OT] Angular certification

2016-10-20 Thread Paul Glavich
Gave up on Ang2. I don’t like the direction, and the release process was silly. 
Aurelia I find much, much better. 

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nick Randolph
Sent: Thursday, 13 October 2016 3:11 PM
To: ozDotNet 
Subject: RE: [OT] Angular certification

 

We’re just in the process of publishing an app written in Angular 2, so yes, 
definitely taking it seriously. A lot of pain upgrading from Beta/RC to RTM 
(it’s like they didn’t understand what Beta/RC means). 

 

Nick Randolph | Built to Roam Pty Ltd | Microsoft MVP – Windows Platform 
Development | +61 412 413 425 | @thenickrandolph | skype:nick_randolph
The information contained in this email is confidential. If you are not the 
intended recipient, you may not disclose or use the information in this email 
in any way. Built to Roam Pty Ltd does not guarantee the integrity of any 
emails or attached files. The views or opinions expressed are the author's own 
and may not reflect the views or opinions of Built to Roam Pty Ltd.

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Tom P
Sent: Thursday, 13 October 2016 3:05 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: [OT] Angular certification

 

Angular 2 is entirely redone. TypeScript also makes it bearable.




Thanks

Tom

 

On 13 October 2016 at 14:54, Greg Keogh wrote:

Are there any Angular certifications you guys can recommend that may be taken 
seriously?

 

Can anyone even take Angular seriously?!

 

I thought it was already abandoned by the author who went off to write a new 
competing BlahJS, or is a new group completely rewriting it to Angular 2, or 
something like that? They all blur together.

 

GK

 



RE: [OT] Angular certification

2016-10-20 Thread Paul Glavich
We have heaps of people using that and have some large Aussie orgs using it to 
great effect. I personally haven’t though.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nick Randolph
Sent: Friday, 14 October 2016 1:32 PM
To: ozDotNet 
Subject: RE: [OT] Angular certification

 

Anyone played with and/or used https://www.polymer-project.org/1.0/?

 

Nick Randolph | Built to Roam Pty Ltd | Microsoft MVP – Windows Platform 
Development | +61 412 413 425 | @thenickrandolph | skype:nick_randolph
The information contained in this email is confidential. If you are not the 
intended recipient, you may not disclose or use the information in this email 
in any way. Built to Roam Pty Ltd does not guarantee the integrity of any 
emails or attached files. The views or opinions expressed are the author's own 
and may not reflect the views or opinions of Built to Roam Pty Ltd.

 

From: ozdotnet-boun...@ozdotnet.com   
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of DotNet Dude
Sent: Friday, 14 October 2016 11:32 AM
To: ozDotNet
Subject: Re: [OT] Angular certification

 

Lol nice article.

 

I can count at least 1 interview per week I do where the candidate knows lots 
of JS libraries but can't write a simple algorithm in ANY language. Web dev in 
2016.

On Friday, 14 October 2016, Wallace Turner wrote:

slightly topical link

 

 

How it feels to learn JavaScript in 2016 

 

 

 

 

On Thu, Oct 13, 2016 at 12:14 PM, Craig van Nieuwkerk wrote:

RC is the new Alpha.

 

On Thu, Oct 13, 2016 at 3:10 PM, Nick Randolph wrote:

We’re just in the process of publishing an app written in Angular 2, so yes, definitely taking it seriously. A lot of pain upgrading from Beta/RC to RTM (it’s like they didn’t understand what Beta/RC means).

 

Nick Randolph | Built to Roam Pty Ltd | Microsoft MVP – Windows Platform 
Development | +61 412 413 425   | @thenickrandolph 
| skype:nick_randolph
The information contained in this email is confidential. If you are not the 
intended recipient, you may not disclose or use the information in this email 
in any way. Built to Roam Pty Ltd does not guarantee the integrity of any 
emails or attached files. The views or opinions expressed are the author's own 
and may not reflect the views or opinions of Built to Roam Pty Ltd.

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Tom P
Sent: Thursday, 13 October 2016 3:05 PM
To: ozDotNet
Subject: Re: [OT] Angular certification

 

Angular 2 is entirely redone. TypeScript also makes it bearable.




Thanks

Tom

 

On 13 October 2016 at 14:54, Greg Keogh wrote:

Are there any Angular certifications you guys can recommend that may be taken 
seriously?

 

Can anyone even take Angular seriously?!

 

I thought it was already abandoned by the author who went off to write a new 
competing BlahJS, or is a new group completely rewriting it to Angular 2, or 
something like that? They all blur together.

 

GK

 

 

 



RE: Entity Framework - the lay of the land

2016-09-18 Thread Paul Glavich
Cache invalidation can be hard under tight race conditions and in a few other scenarios. There are many instances where it can be very easy, depending on use cases and data needs. I have used it to great effect for many years.

 

Like you mentioned, don’t write it off just because it can be hard. Kinda like designing and implementing solutions :)

 

Note: I have never used ORM caching functionality and probably never will.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 19 September 2016 10:38 AM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

 

I had an argument internally that caching was good, with the alternate side 
saying that “cache invalidation” was hard so they never use it.

 

I think it is "hard" but don't write it off completely. Search for "second 
level cache" and you'll see it's not that trivial to use properly. Some ORMs 
have it as an optional feature. You've got to consider what to cache, eviction 
or expiry policy, concurrency, capacity, etc. I implemented simple caching in a 
server app a long time ago, then about a year later I put performance counters 
into the code and discovered that in live use the cache was usually going empty 
before it was accessed, so it was mostly ineffective. Luckily I could tweak it 
into working. So caching is great, but be careful -- GK
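Greg's points above (what to cache, the expiry policy, and measuring whether the cache is actually being hit before it empties) can be made concrete with a minimal sketch. This is illustrative Python, not the .NET code under discussion; all names are invented, and a real second-level cache would also need capacity limits, eviction, and thread safety.

```python
import time

class ExpiringCache:
    """Minimal cache with absolute expiry and hit/miss counters.

    The counters are the "performance counters" idea: they reveal
    whether the cache is usually empty when it is accessed.
    """

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (value, expires_at)
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and entry[1] > now:
            self.hits += 1
            return entry[0]
        # Miss (absent or expired): load from the backing store and cache it.
        self.misses += 1
        value = loader(key)
        self.store[key] = (value, now + self.ttl)
        return value

cache = ExpiringCache(ttl_seconds=60)
calls = []
def load_from_db(key):
    calls.append(key)          # stands in for a real DB round trip
    return key.upper()

assert cache.get("a", load_from_db) == "A"   # miss: hits the "DB"
assert cache.get("a", load_from_db) == "A"   # hit: no second DB call
assert (cache.hits, cache.misses, calls) == (1, 1, ["a"])
```

If the hit counter stays near zero in live use, the TTL or the access pattern is wrong, which is exactly the situation Greg describes discovering a year later.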



RE: Entity Framework - the lay of the land

2016-09-18 Thread Paul Glavich
>> Finally, caching is your friend. I'm called in all the time to help with 
>> concurrency and scale issues. The #1 way to get a DB to scale is to stop 
>> talking to it in the first place.

Boom. I have been advocating that for years. That line is almost exactly the 
one I used in my performance book too.

 

I had an argument internally that caching was good, with the alternate side 
saying that “cache invalidation” was hard so they never use it.

 

Good to know my opinion is officially endorsed by “The good doctor” :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Low (罗格雷格博士)
Sent: Sunday, 18 September 2016 11:47 AM
To: ozDotNet 
Subject: Re: Entity Framework - the lay of the land

 

Three other key aspects of this:

 

If your table design matches your object design, at least one of them is a poor 
design (again I'm talking about serious apps). Yet most ORMs start with this 
assumption.

 

If you don't bypass your normal object access paths for high speed operations, 
you'll have serious perf issues. It might seem clean to load up a set of 
entities to filter and paginate them on each call. People who do that keep 
generating work for people like me though.

 

Finally, caching is your friend. I'm called in all the time to help with 
concurrency and scale issues. The #1 way to get a DB to scale is to stop 
talking to it in the first place.

 

Regards,

Greg

Dr Greg Low
1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax
SQL Down Under | Web: www.sqldownunder.com  

 

_
From: Greg Low (罗格雷格博士)
Sent: Saturday, September 17, 2016 11:11 AM
Subject: RE: Entity Framework - the lay of the land
To: ozDotNet





And if you have two days free on the 28th/29th of this month, come and spend those days on starting to get your head around query performance: http://www.sqldownunder.com/Training/Courses/3 (And sorry, Melbourne only this year. Might get time mid-next year for a Sydney one).

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com | http://greglow.me

 

From: Greg Low (罗格雷格博士)
Sent: Saturday, 17 September 2016 11:04 AM
To: ozDotNet
Subject: RE: Entity Framework - the lay of the land

 

Hey Dave and all,

 

“The great” -> hardly but thanks Dave.

 

Look, my issues with many of these ORMs are many. Unfortunately, I spend my 
life on the back end of trying to deal with the messes involved. The following 
are the key issues that I see:

 

Potentially horrid performance

 

I’ve been on the back end of this all the time. There are several reasons: the frameworks generate horrid code to start with; they are typically quite resistant to improvement; and they tend to encourage processing with far too much data movement.

 

I regularly end up in software houses with major issues that they don’t know 
how to solve. As an example, I was at a start-up software house recently. They 
had had a team of 10 developers building an application for the last four 
years. The business owner said that if it would support 1000 concurrent users, they would have a viable business; 5000 would make a good business; with 500 they might survive. They had their first serious performance test two weeks before 
they had to show the investors. It fell over with 9 concurrent users. The 
management (and in this case the devs too) were panic-stricken.

 

Another recent example was a software house that had to deliver an app to a 
government department. They were already 4 weeks overdue and couldn’t get it 
out of UAT. They wanted a silver bullet. That’s not the point at which to be discussing their architectural decisions, yet those were the issue.

 

I was in a large financial institution in Sydney a while back. They were in the middle of 
removing the ORM that they’d chosen out of their app because try as they might, 
they couldn’t get anywhere near the required performance numbers. Why had they 
called me in? Because before they wrote off 8 months’ work for 240 developers, 
the management wanted another opinion.

 

Just yesterday I was working on a background processing job that processes a certain type of share trade in a UK-based financial services organisation. On a system with 48 processors, 1.2 TB of memory, and seven 20 TB flash drive arrays (at 1 million UK pounds each), it ran for 48 minutes. During that time, it issued 550 
million SQL batches to be processed and almost nothing else would work well on 
the machine at the same time. The replacement job 

RE: Command and Query Responsibility Segregation Pattern (CQRS)

2016-07-17 Thread Paul Glavich
It would seem to be the case. CQRS and Repository are not mutually exclusive patterns; far from it, actually. They are quite often used together. I would say CQRS is a far broader pattern than Repository, which simply abstracts the data store mechanism, whereas CQRS is a functionally more complex pattern. I would be curious how they are storing commands and interacting with the query engine, though.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Nathan Schultz
Sent: Thursday, 14 July 2016 5:13 PM
To: ozDotNet 
Subject: Re: Command and Query Responsibility Segregation Pattern (CQRS)

 

Hi Tony,

 

Yeah, it seems strange to me too. 

 

CQRS is often used in conjunction with Event Sourcing (i.e. an append-only data store). So maybe he's thinking of the Repository pattern as a traditional CRUD interface, and it's that which they're not using?

 

Regards,

 

Nathan.

 

On 14 July 2016 at 14:03, Tom Rutter wrote:

Hey Tony, I too am confused by the developer's comment. My understanding is the 
same as yours it seems.

 

 

On Wed, Jul 13, 2016 at 8:12 PM, Tony Wright wrote:

Hi all,

 

I had a discussion the other day with an experienced developer who told me that 
"instead of using the repository pattern, they just use CQRS these days."

 

I am somewhat puzzled with that statement, because it is my understanding that 
the two are almost completely independent of each other.

 

In simple terms, CQRS is used to separate commands (writes) from queries (reads), so data received from a database uses different classes from the ones used to submit updates, e.g. PersonCreateInputDto, which might contain just the fields used to 
create a new person in the database, and PersonOutputDto, which might contain 
just the fields needed to display a list of Person records. You don't use the 
same object for both types of transaction, just the bare minimum in each.

 

Repository, on the other hand, is used for dependency injection. By changing 
the dependency provider, I can switch a set of runtime classes with a set of 
testing classes. The dependency provider injects the dependent objects that are 
desired at the time, which could be either runtime objects, or mock testing 
objects, so it is predominantly used to enable better testing.
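The two patterns described in the paragraphs above are orthogonal, which a few lines make concrete. This is a hypothetical sketch in Python (the thread's context is .NET); the DTO and repository names echo the examples above, and the in-memory repository stands in for the test-double idea.

```python
from dataclasses import dataclass
from typing import Protocol

# Command side: only the fields needed to create a record.
@dataclass
class PersonCreateInputDto:
    name: str
    email: str

# Query side: only the fields needed to display a list.
@dataclass
class PersonOutputDto:
    person_id: int
    name: str

# Repository: abstracts the data store, independently of CQRS.
class PersonRepository(Protocol):
    def add(self, cmd: PersonCreateInputDto) -> int: ...
    def list_people(self) -> list: ...

class InMemoryPersonRepository:
    """A test double: swapping this in for a real store is the DI/testing benefit."""
    def __init__(self):
        self.rows = []

    def add(self, cmd):
        self.rows.append((len(self.rows) + 1, cmd.name, cmd.email))
        return self.rows[-1][0]

    def list_people(self):
        return [PersonOutputDto(pid, name) for pid, name, _ in self.rows]

repo = InMemoryPersonRepository()
new_id = repo.add(PersonCreateInputDto(name="Ada", email="ada@example.com"))
assert repo.list_people() == [PersonOutputDto(person_id=new_id, name="Ada")]
```

The command/query split lives entirely in the DTOs; the repository could be backed by anything, which is why using one pattern says nothing about the other.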

 

I got the impression that the person was somehow using CQRS to perform their testing instead. Is there something that I'm missing here?

 

Regards,

Tony

 

 



RE: Web API HelpPage

2016-03-08 Thread Paul Glavich
We started using the WebAPI help package quite some time ago. We did some tweaks; it came out with an update; we updated; it broke pretty much everything. We have since customised the crap out of it and are now pretty happy with it. There are some shortcomings, but overall it is doing what we want, and the dev story is pretty clean. We absolutely advocate some manual content that augments the pure tech stuff, though.

 

You can see the result here https://api.saasu.com

 

We made the call a little while ago not to worry about any future upgrades or anything like that. It has enough customisation that upgrading isn't worth the pain of breakage.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Wednesday, 9 March 2016 10:52 AM
To: ozDotNet 
Subject: Re: Web API HelpPage

 

 

On Wed, Mar 9, 2016 at 10:27 AM, Greg Keogh wrote:

Folks, just a heads-up ... I was looking for a way of auto-generating Web API 
documentation and I found THIS 

  from 2013, and there are similar articles all over the place, all a bit old. 
The advice on getting it going works after you add the Nuget package and make 
the tweaks. However: 

*   I had to manually alter the package code to read and merge multiple 
xmldoc files.
*   I cannot get the Request Formats to say anything but "Sample not 
available".
*   I cannot get the Additional Information column to display anything 
because my entity classes are in a portable library and can't have attributes 
applied to them.
*   All standard xmldoc tags in your comments are stripped back to empty 
strings in the web API help, meaning you can't have both API help and 
Sandcastle help for the same code with nice formatting and links.
*   Instructions on how to style the API help no longer work and it's all 
dull black and white.

 

So for simple API help it works, but the moment you want to customise or 
enhance anything it all goes to hell. I've wasted up to 4 solid hours trying to 
workaround all of the irritations I described. It's like someone had a great 
idea with this, but coded it like a high school project.

 

 

lol sound like every project I've inherited

 

Perhaps there are better tools, but a quick look at Swagger hints that you have 
to write almost all of the documentation and samples manually as JSON.

 

GK

 



RE: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread Paul Glavich
I was actually looking at picking one up. I really want a Surface Pro 4 or Surface Book, but the firmware problems, and mostly the exorbitant price, turn me away. In addition, the speed at which older models of Surface (namely 2 and 3) are simply ditched and no longer made (i.e. peripherals/replacements soon dry up) as soon as new models arrive means the life of these units is pretty short.

 

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be grabbed with 16GB of memory, a great screen, touch, and a good proc. Haven't played with one personally though.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet 
Subject: Any opinions on the Dell XPS 15 laptops?

 

? As per subject ?

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax


SQL Down Under | Web:   www.sqldownunder.com

 



RE: Any opinions on the Dell XPS 15 laptops?

2016-02-09 Thread Paul Glavich
>> starting to have some screen separation near the hinges

 

My current laptop has huge separation of the plastic that connects the hinges and screen. So much so that I can pretty much poke a finger into it when opening the lid and see visible electronic componentry shift around. It is currently bound together with masking tape, which helps; hence my need for a new lappy :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Low (罗格雷格博士)
Sent: Wednesday, 10 February 2016 8:12 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

 

We have some E7440's that have been excellent but are now starting to have
some screen separation near the hinges. (Bit surprising really). Otherwise,
love them. So, was thinking about the E7470's but now thinking XPS 15's. My
eyesight would appreciate the 15 inch screen, and the narrow bezel makes it
not much larger than the 14 inch units.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax


SQL Down Under | Web: www.sqldownunder.com

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Paul Glavich
Sent: Wednesday, 10 February 2016 8:06 AM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Any opinions on the Dell XPS 15 laptops?

 

I was actually looking at picking one up. I really want a Surface Pro 4 or Surface Book, but the firmware problems, and mostly the exorbitant price, turn me away. In addition, the speed at which older models of Surface (namely 2 and 3) are simply ditched and no longer made (i.e. peripherals/replacements soon dry up) as soon as new models arrive means the life of these units is pretty short.

 

The Dell XPS 15 looks really nice, as does the XPS 13. Both can be grabbed with 16GB of memory, a great screen, touch, and a good proc. Haven't played with one personally though.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Greg Low (罗格雷格博士)
Sent: Tuesday, 9 February 2016 3:19 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Any opinions on the Dell XPS 15 laptops?

 

? As per subject ?

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913 fax


SQL Down Under | Web: www.sqldownunder.com

 



RE: SPAM-LOW: RE: [OT] Internal Developer Training

2016-02-09 Thread Paul Glavich
Oh yes, had that happen a few times.

 

In addition, you get a few people (sometimes very few) who are keen to present, but a majority who are happy to simply attend and don’t put much effort into presenting. As long as you can keep the balance good, it won’t peter out over time.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Andrew Coates (DX AUSTRALIA)
Sent: Tuesday, 9 February 2016 4:21 PM
To: ozDotNet 
Subject: SPAM-LOW: RE: [OT] Internal Developer Training

 

7’s probably at the bottom end of enough for critical mass. You don’t need many people to be on leave, sick, or working on that urgent project before someone’s presenting to 2 or 3 people after spending 6 hours of their own time prepping.

 

Is there anyone else in the org you could rope in? Testers, *gasp* designers? 
Etc?

 

Andrew Coates, ME, MCPD, MCSD MCTS, Developer Evangelist, Microsoft, 1 Epping 
Road, NORTH RYDE NSW 2113
Ph: +61 (2) 9870 2719 • Mob +61 (416) 134 993 • Fax: +61 (2) 9870 2400 • http://blogs.msdn.com/acoat

 

From: ozdotnet-boun...@ozdotnet.com   
[mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Dave Walker
Sent: Tuesday, 9 February 2016 4:16 PM
To: ozDotNet
Subject: RE: [OT] Internal Developer Training

 

Cheers this looks awesome! Team is 7 people so not small. 

On 9 Feb 2016 17:22, "Andrew Coates (DX AUSTRALIA)" wrote:

Hi Dave,

 

How big is your team?

 

One of the things we’ve seen work well is to have a regular “internal user 
group” every fortnight, put 2 hours aside over lunch or towards the end of the 
day, bring in pizza, and have someone from your team do a general technical 
presentation (45-60 min). Then have someone present a technical overview of 
their project (or part thereof).

 

For example:

 


Time      | Topic                                                                                         | Presenter
----------|-----------------------------------------------------------------------------------------------|---------------------------------------------
2:00-2:15 | Welcome, Q&A                                                                                  | Group Leader
2:15-3:15 | Technical Presentation (e.g. “Using Windows Communication Foundation to Interface with SAP”) | Developer/Architect from within organisation
3:15-3:30 | Break                                                                                         | All
3:30-4:00 | Project Presentation (e.g. “Project Blackcombe: Challenges, Solutions and Status”)            | Project Blackcombe lead developer
4:00-5:00 | Drinks/Networking                                                                             | All

 

Over 6 meetings you could do something like this:

 


Month   | Technology Session                                                          | Internal Session
--------|-----------------------------------------------------------------------------|-----------------------------------------------------------
Month 1 | Customising Office with Add-ins                                             | Exposing our CRM information inside the firewall
Month 2 | jQuery integration in VS2015                                                | How we updated our external site to use Bootstrap
Month 3 | Branching and Merging – a primer                                            | Project “Discovery”’s use of TFS for source control
Month 4 | Mobile Client Development Smackdown – Native vs Xamarin vs Cordova vs HTML5 | Deploying our new ERP solution
Month 5 | Using geographic data in SQL2014                                            | Geolocating our customers
Month 6 | Introduction to Aspect Oriented Programming                                 | Adding unit tests to the project “Conquistador” code base

 

Make a bit of a big deal about the group. Encourage people to present (give 
them a speaker shirt or something). Get evals at the end of each session. Give 
the top speaker for the year a trip to Ignite, or something. Note that 
technical presentations don’t have to be original – there are heaps of 
repositories of up-to-date technical presentations complete with presenter 
notes, demo scripts and so on.

 

Giving a developer a presentation to deliver means that they’ll go away and 
play with the tech so they can at least run the demos. It gives them a bunch of 
soft skills as well, and it makes them the internal “expert” in that thing. 
People will ask them questions about it and that will kick off the cycle of 
discovery for them. They’ll tend to look up the answer to those questions if 
they don’t already know.

 

Note that you’ll need an exec sponsor for this – taking the team off the tools 
for a couple of hours (or 3) a month is a commitment they’ll need to support.

 

This works even better if it’s not just your team – cross-pollination and 
emergence of technical centres of excellence within the organisation are very 
desirable things.

 

Happy to chat more either here or offline if you like.

 

Cheers,

 

Coatsy.

 

Andrew Coates, ME, MCPD, MCSD MCTS, Developer Evangelist, Microsoft, 1 Epping 
Road, NORTH RYDE NSW 2113
Ph: +61 (2) 9870 2719 • Mob +61 (416) 134 993 • Fax: +61 (2) 9870 2400 • http://blogs.msdn.com/acoat

 

From: 

RE: (Azure service) Logging

2016-02-07 Thread Paul Glavich
Hey Corneliu,

 

For one of our projects, we used Azure Application Insights to log everything. Normally, you’d inject some JS into a page and it would log page visits, but we do not use it for that. We simply log events and use its search to aggregate and search data. It actually works really well. I haven’t tried it with the volume of data you have, but I believe it would be OK.
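The approach described above (skip the page-view JS, just record named custom events and search over their properties) can be sketched generically. This is a hypothetical Python sketch, not the Application Insights SDK itself; the function names and event names are invented, and in the real setup the recording side would be the SDK's custom-event call with the search done in the portal.

```python
from datetime import datetime, timezone

events = []  # stands in for the hosted event store

def track_event(name, **properties):
    """Record a named event with arbitrary key/value properties."""
    events.append({
        "name": name,
        "time": datetime.now(timezone.utc),
        "properties": properties,
    })

def search(name=None, **property_filters):
    """Filter events by name and exact property values (the 'aggregate and search' step)."""
    out = []
    for e in events:
        if name is not None and e["name"] != name:
            continue
        if all(e["properties"].get(k) == v for k, v in property_filters.items()):
            out.append(e)
    return out

track_event("InvoicePaid", customer="acme", amount=120)
track_event("InvoicePaid", customer="globex", amount=40)
track_event("LoginFailed", customer="acme")

assert len(search("InvoicePaid")) == 2
assert len(search(customer="acme")) == 2
```

The point of the shape is that events carry structured properties rather than free text, which is what makes the later searching and aggregating cheap.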

 

Ping me directly and I can show you

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Corneliu I. Tusnea
Sent: Monday, 8 February 2016 10:44 AM
To: ozDotNet 
Subject: (Azure service) Logging

 

Hi,

 

How do you guys do logging in your application? And how do you search through 
logs for various issues?

 

We have a cloud app deployed in Azure and we implemented our own logging that 
logs both to disk in nice neat log files and to Azure table storage.

This works great, but it's hard to search through the logs at times. We generate around 1–5 GB of logs a day (and no, we can't really reduce that atm) and store 90–120 days of logs.

 

What are some good ways to store & search through these logs?

 

I looked at getseq.net and the associated libraries, but I don't know if I want to deploy a new server just to handle logging.

 

Thoughts?

 

Thanks,

Corneliu.

 



RE: Anyone using REST based SMS services for Australia that they can recommend

2015-12-08 Thread Paul Glavich
We use SMSGlobal http://www.smsglobal.com/

 

Pretty good. Reliable, multiple ways to send and also pay.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Arjang Assadi
Sent: Thursday, 3 December 2015 5:47 PM
To: ozDotNet 
Subject: Anyone using REST based SMS services for Australia that they can recommend

 

Hello

 

Anyone using REST based SMS services for Australia that they can recommend?

 

Azure based would be even better!

 

Regards

 

Arjang



RE: [OT] SSL testing

2015-11-03 Thread Paul Glavich
I have run that script on our staging and production servers. Works well.

 

Take a registry backup prior. Run it. If there are issues, restore it.

 

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Tuesday, 3 November 2015 12:00 PM
To: ozDotNet 
Subject: Re: [OT] SSL testing

 

"An F grade is unacceptably bad, definitely something he needs to get sorted. 
Hold the web developer / company accountable for that."

 

I could barely sleep last night knowing that I'd flunked with an F. The trouble 
is, I don't know who to blame (I am the developer and the company!!). My web 
server is a pretty vanilla Win2008R2 install and I got the cert from Comodo 6 
months ago. I sort of expected that regular Windows Updates would be fixing 
this sort of thing, or perhaps I'd get some sort of security alert somehow. Why 
are out-of-the-box servers falling behind best security practices?

 

I want my server to get an A, but the script I mentioned before worries me and 
I'd prefer some specific and trustworthy instructions from somewhere like 
TechNet, a KB or MSDN to tell me exactly what to do.

 

Greg K



RE: SSL Certs

2015-11-02 Thread Paul Glavich
Hey Greg,

 

Not sure on the .com.au and .com SSL certs as I have always grabbed a cert
for one or the other. You could look at something like this
https://www.digicert.com/ev-multi-domain-ssl.htm which is a multi-domain
cert but even then I am not sure whether you could do .com and .com.au. You
may be better off getting 2 simple SSL certs. Azure will let you add
multiple domain names to a site, as long as you own the domains, it is
pretty easy, as well as adding in multiple certificates.

 

While you could technically route the requests from .com.au to .com (DNS is probably best for that, i.e. point .com.au at the same IP as .com), unless you have a cert matching the domain, the browser will complain. At least that is how I understand it anyway.
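The routing Paul describes (point both domains at the same site, then redirect by host) boils down to a check on the incoming Host header. A hypothetical sketch in Python; the real app is ASP.NET MVC, where the same logic would live in a rewrite rule or route filter, and the domain names here are the thread's placeholders.

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "abcdef.com"
ALIASES = {"abcdef.com.au", "www.abcdef.com.au", "www.abcdef.com"}

def canonical_redirect(url):
    """Return a 301 target on the canonical host, or None if no redirect is needed."""
    parts = urlsplit(url)
    if parts.hostname == CANONICAL_HOST:
        return None
    if parts.hostname in ALIASES:
        # Preserve path and query; only the host changes.
        return urlunsplit(("https", CANONICAL_HOST, parts.path, parts.query, ""))
    return None  # unknown host: let the app decide

assert canonical_redirect("https://abcdef.com.au/shop?item=1") == "https://abcdef.com/shop?item=1"
assert canonical_redirect("https://abcdef.com/shop") is None
```

Note the cert caveat still applies: the alias hosts need a valid certificate of their own, because the TLS handshake happens before this redirect is ever sent.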

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Low (罗格雷格博士)
Sent: Monday, 2 November 2015 9:08 PM
To: ozDotNet 
Subject: RE: SSL Certs

 

I suppose a more basic question is:

 

What’s the cleanest way in an Azure website MVC app to route all requests
for abcdef.com.au to abcdef.com ?

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913
fax 

SQL Down Under | Web:   www.sqldownunder.com

 

From: Greg Low (罗格雷格博士) 
Sent: Monday, 2 November 2015 9:03 PM
To: ozDotNet
Subject: SSL Certs

 

Hi Guys,

 

If using two domains like:

 

abcdef.com 

and 

abcdef.com.au 

 

(and obviously the site also has the www. versions of those too).

 

For SSL on Azure websites, thoughts on whether we should do two certs, or
just do that on one of them and then do some sort of redirect for the other
one? (It’s MVC)

 

TIA,

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913
fax 

SQL Down Under | Web:   www.sqldownunder.com

 



RE: SSL Certs

2015-11-02 Thread Paul Glavich
Actually, maybe this would suit you better:
https://www.digicert.com/unified-communications-ssl-tls.htm

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Low (罗格雷格博士)
Sent: Monday, 2 November 2015 9:08 PM
To: ozDotNet 
Subject: RE: SSL Certs

 

I suppose a more basic question is:

 

What’s the cleanest way in an Azure website MVC app to route all requests
for abcdef.com.au to abcdef.com ?

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913
fax 

SQL Down Under | Web:   www.sqldownunder.com

 

From: Greg Low (罗格雷格博士) 
Sent: Monday, 2 November 2015 9:03 PM
To: ozDotNet
Subject: SSL Certs

 

Hi Guys,

 

If using two domains like:

 

abcdef.com 

and 

abcdef.com.au 

 

(and obviously the site also has the www. versions of those too).

 

For SSL on Azure websites, thoughts on whether we should do two certs, or
just do that on one of them and then do some sort of redirect for the other
one? (It’s MVC)

 

TIA,

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile│ +61 3 8676 4913
fax 

SQL Down Under | Web:   www.sqldownunder.com

 



RE: [OT] SSL testing

2015-11-02 Thread Paul Glavich
You generally should fix these as it means your system is open to information 
leakage or inspection from malicious people. Depending on the site and what it 
hosts, this may not be a big issue but the tools to exploit these holes get 
more common as time goes on.

 

To fix the certificate issues, just get a new cert from somewhere like Digicert 
that offers quality certificates that are quite cheap (note: if you have to 
support older OS’s like Windows XP, they will not have the necessary root 
certificates installed and thus complain about your cert).

 

For the other warnings, you generally have to patch the OS to some degree. On Windows systems there is a simple PowerShell script you can run that alters the registry and disables the fallback to older algorithms that have known exploits. How much you need to do depends on the OS level, though. I attached the PowerShell script I used to disable older algorithms on one of my servers, but make sure it suits your OS. I don’t have the link handy for where I got it, sorry.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 2 November 2015 2:40 PM
To: ozDotNet 
Subject: Re: [OT] SSL testing

 

I noticed a mate's shopping site over the weekend returning the following in 
the connection info for the certificate:

 

I just tested my own domain with its 6 month old certificate. I also got a 
series of frightening warnings:

 

This server supports SSL 2, which is obsolete and insecure. Grade set to F. 
This server is vulnerable to the POODLE attack. If possible, disable SSL 3 to 
mitigate. Grade capped to C.
Certificate uses a weak signature. When renewing, ensure you upgrade to SHA2.
The server supports only older protocols, but not the current best TLS 1.2. 
Grade capped to C.
This server accepts the RC4 cipher, which is weak. Grade capped to B.

 

The long and detailed list of test results are quite complicated. I'm not happy 
about getting an F for flunk grade, but I'm not sure what I can do about it, or 
if I'm even supposed to do anything.

 

Comments ... anyone knowledgeable on these matters?

 

Greg K



DisableSSLv3.ps1
Description: Binary data


RE: [OT]Work in Sydney.

2015-09-17 Thread Paul Glavich
As Greg mentioned, how strong? In addition, any other skills, front end,
server side or both?

 

We may have an opportunity or two soon.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Low (罗格雷格博士)
Sent: Thursday, 17 September 2015 11:44 PM
To: ozDotNet 
Subject: Re: [OT]Work in Sydney.

 

How strong on .NET ?

Regards 

 

Greg

 

Dr Greg Low

SQL Down Under

+61 419201410

1300SQLSQL (1300775775)


On 17 Sep 2015, at 10:04 PM, David Rhys Jones wrote:

Hi all  

I've got a colleague who is moving to Sydney next wednesday to start his
visa process. 

 

Does anyone know of .Net positions that are open?

 

Thanks
Davy




 

Si hoc legere scis nimium eruditionis habes.

 



RE: Re-request - Office chair again

2015-09-15 Thread Paul Glavich
Thanks again Dave.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Dave Walker
Sent: Wednesday, 16 September 2015 8:23 AM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Re-request - Office chair again

 

Hi, sure. 

 

I started that convo and ended up going with an Aeron 
http://www.hermanmiller.com/products/seating/performance-work-chairs/aeron-chairs.html
 - so far it's amazing. Never even notice it's there vs my chair at work which 
causes me regular lower back pain.

 

Other options included the Steelcase Leap 
http://www.steelcase.com/products/office-chairs/leap/

 

I can't remember any others; these were the top two for me.

 

 

On 16 September 2015 at 10:18, Paul Glavich <subscripti...@theglavs.com> wrote:

Hi all,

 

I know there was a thread on office chairs here not so long ago and I remember 
Greg Low and a few others recommend a particular brand which I had intended to 
look into. Recent spinal issues have caused me to revisit this sooner rather 
than later but I have since lost the email and was wondering if someone could 
just send (off or on list) that chair recommendation again please?

 

-  Glav

 

 



Re-request - Office chair again

2015-09-15 Thread Paul Glavich
Hi all,

 

I know there was a thread on office chairs here not so long ago and I
remember Greg Low and a few others recommend a particular brand which I had
intended to look into. Recent spinal issues have caused me to revisit this
sooner rather than later but I have since lost the email and was wondering
if someone could just send (off or on list) that chair recommendation again
please?

 

-  Glav

 



RE: PayPal Integration

2015-09-15 Thread Paul Glavich
Well, we have had no complaints thus far (it is in beta atm); in fact we have 
received some really great feedback (ratings in the range of 7-9 out of 10).

 

In terms of PayPal, it looks like we have to use it for hosting, so when we get 
to that part it will probably involve a redirect to a hosted payment page. We 
provide wording describing the security involved, specifically that we do 
nothing with the payment details and let the respective sites handle all that. 
Again, so far so good.

 

In addition to this, a lot of customers have requested recurring payments and 
having their details saved by us. By details, they mean their credit card 
details, so that we can re-use them. The implementation of this seems secondary 
to them; they are opting for convenience and assuming security. Again, we don't 
store any payment/credit card details and never will, but these requests from 
customers don't even mention PayPal and/or secure storage; they just assume it.

I think this says something about perception in general, though. In addition, we 
have had the odd payment gateway failure here (we use TNSI locally), and when it 
happens we actually have customers email through their entire credit card 
details (yes: name, date, card number and CVV) to our accounts team to pay their 
monthly bill, because they could not be bothered with the hassle of retrying 
later, or perhaps left it too late or something. Yes, it is a thing. Smart 
business people are doing this.

 

Bottom line out of all this (for us anyway) is that convenience seemingly trumps 
security; security is assumed to be a given. I guess until something 
security-related happens.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Tuesday, 15 September 2015 3:07 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: PayPal Integration

 

Totally agree with Greg here. Unless you are a major site that I somewhat 
"trust" (as if) I won't purchase if there is no paypal option that goes to 
paypal.

On Tuesday, 15 September 2015, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:

Hi Glav,

 

Do you find that customers are happy entering their details into a form on your 
site?

 

As a consumer, I’d be happier entering details directly into a PayPal, Stripe, 
eWay, etc. site. Otherwise, how do I have any idea how you are handling them?

 

It’s one of the reasons that I’m seeing more customers wanting to use PayPal. 
At least their card details are only stored on a single site.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Paul Glavich
Sent: Tuesday, 15 September 2015 2:16 PM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: PayPal Integration

 

Yep. We have recently integrated with Stripe, Braintree and eWay. Of the three, 
Braintree and Stripe are the easiest. eWay has a lot of redirecting going on. 
The Braintree and Stripe client portions are very good. eWay's site/portal is a 
little messy, while I found the others more straightforward.

 

Also, we recently had a visit from PayPal about integrating with them (even 
though Braintree is affiliated with them). They only have a hosted payment 
option, and we didn't want that at all. All three (eWay, Stripe and Braintree) 
go through a single form on our site, but we don't host or collect any payment 
credentials; it is just our custom form.

PayPal do have a new payment system which requires no initial setup. You simply 
provide an email address and payments go to PayPal under that email. They are 
'held' or 'buffered' there until such time as the full setup is completed, which 
actually makes setup really easy from a customer perspective.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Monday, 14 September 2015 4:59 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: PayPal Integration

 

Anyone used braintree?

On Monday, 14 September 2015, Greg Low (罗格雷格博士) <g...@greglow.com> wrote:

Yep, I like Stripe, and eWay was quite simple, but bunches of clients want to 
use PayPal; plus it seems to do the best multicurrency work.

 

I presume the increasing preference for PayPal is to only have one se

RE: PayPal Integration

2015-09-14 Thread Paul Glavich
Yep. We have recently integrated with Stripe, Braintree and eWay. Of the three, 
Braintree and Stripe are the easiest. eWay has a lot of redirecting going on. 
The Braintree and Stripe client portions are very good. eWay's site/portal is a 
little messy, while I found the others more straightforward.

 

Also, we recently had a visit from PayPal about integrating with them (even 
though Braintree is affiliated with them). They only have a hosted payment 
option, and we didn't want that at all. All three (eWay, Stripe and Braintree) 
go through a single form on our site, but we don't host or collect any payment 
credentials; it is just our custom form.

PayPal do have a new payment system which requires no initial setup. You simply 
provide an email address and payments go to PayPal under that email. They are 
'held' or 'buffered' there until such time as the full setup is completed, which 
actually makes setup really easy from a customer perspective.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of DotNet Dude
Sent: Monday, 14 September 2015 4:59 PM
To: ozDotNet 
Subject: Re: PayPal Integration

 

Anyone used braintree?

On Monday, 14 September 2015, Greg Low (罗格雷格博士) wrote:

Yep, I like Stripe, and eWay was quite simple, but bunches of clients want to 
use PayPal; plus it seems to do the best multicurrency work.

 

I presume the increasing preference for PayPal is to only have one set of card 
details online instead of at numerous gateway sites.

Regards 

 

Greg

 

Dr Greg Low

SQL Down Under

+61 419201410

1300SQLSQL (1300775775)


On 14 Sep 2015, at 2:14 pm, Craig van Nieuwkerk wrote:

I have integrated with Stripe as well and it is great. Very easy, everything 
PayPal should be but for some reason isn't. 

 

PayPal just needs to draw a line in the sand with their old API(s), deprecate 
them, and build something new and easy like Stripe. 

 

On Mon, Sep 14, 2015 at 12:53 PM, Stephen Price wrote:

I was looking at the options recently but have only gotten to the decision-made 
step; implementation will come later.

I chose https://stripe.com/au/features

 

Don't have anything like a readers digest post but their docs look good with 
examples for many languages. I enquired about eWay some time ago and have been 
getting spammed ever since. Told them I chose someone else and still get their 
emails. I should hit the unsubscribe button now that I'm thinking of it. 

 

On Sun, 13 Sep 2015 at 16:27 Greg Low (罗格雷格博士) wrote:

Hi Folks,

 

Been using eWay for ages and integrating with it was pretty trivial.

 

Wanting to add PayPal now (to existing MVC app in VS2015). Been reading the 
developer doco and it seems like a convoluted mess. Presumed one of you must 
have been doing this lately.

 

I need to know the outcome for the next processing stage so it does need one of 
the notification-based options, rather than just HTML forms or something.

 

There seem to be endless discussions around IPN vs PDT vs Express. I want the 
customers to be able to pay via PayPal or by using a credit card when not 
PayPal members. I presume I can do this via the REST APIs.

 

Anyone got a “reader’s digest” version of the minimal code that’s required to 
simply add the ability to take a payment? 

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax

SQL Down Under | Web: www.sqldownunder.com

 

 



RE: TypeScript summary

2015-09-12 Thread Paul Glavich
Corneliu et al,

 

Just to add/answer a few extras

* Yeoman is a scaffolder – install it and run "yo aurelia" (from memory) 
and you get an entire working project, all set up

* Angular2 and Aurelia are both reasonable choices (IMHO). I do like 
the Aurelia syntax better

o   The mobile performance will be critical to both of these

* Both are reasonable learning curves

* The future is ES6/ECMAScript 2015, modules and classes (IMHO). 
TypeScript helps, but ES6 is the killer syntax (IMHO). Learn ES6 and you can 
really start to structure your JS packages much better; coupled with a module 
loader/dependency manager, it is very good (see point below).

* Couple the above with an excellent package manager like JSPM, which can 
load, minify and bundle your JS dependencies and packages. It really is good. 
It is smart enough to know what all your dependencies are and make them 
available in one file.

* >> Side note> Angular1 requires a massive amount of work to get 
anything working

o   I don’t really agree with that although it does take more work. We use 
Angular 1 with little work and have a complex set of functionality built using 
it and it really helped. But that’s a side note/opinion.

* I've also been using Ionic, which is based on Angular 1 (amongst other 
things). It's great. It will be moving to Angular2 when it's out, so I'd say 
it's worth learning both Angular2 and Aurelia just for that.

 

Finally, if you learn nothing else, learn ES6 and JSPM. They can really help in 
your decision making.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Corneliu I. Tusnea
Sent: Wednesday, 9 September 2015 4:14 PM
To: ozDotNet 
Subject: Re: TypeScript summary

 

Thomas,

 

You can just add Aurelia to the head and get started, just like Angular(1), 
albeit your productivity will be slow.

 

Your issues sound to me like saying I can just open Notepad and start coding my 
C# project. Why would I install Visual Studio?

Why would you install Nuget or MSBuild or System.Web.Optimization libraries to 
bundle JS files? You installed them all as part of Visual Studio, that's the 
only difference.

- node.js is like .Net framework (that comes these days as part of Windows)

- gulp is msbuild

- nuget is npm and bower

- System.Web.Optimization is like jspm + nuget

- Yeoman - I have no idea, I haven't installed or used that

- systemjs is not required; it's a nice-to-have that makes things easier to load 
and does the bundling/dependency resolving, to save you from "just adding 
another .js file to the head". You can keep doing that and not need systemjs. 
Kind of the .NET BundleCollection on steroids.

- Babel - don't know, didn't use it.

- TypeScript - it's an awesome option that compiles down to JS directly, without 
Babel. You really want to use this if you want to avoid writing JS. TypeScript 
looks and feels like C# instead of JS.

Again, it's optional but heck, I hate JS

 

You can get prepared startup projects for VisualStudio with none of the above 
odd tools:

https://github.com/cmichaelgraham/aurelia-typescript/tree/master/skel-nav-require-vs-ts

Clean .NET solution with a couple of JS files. 

 

Side note> Angular1 requires a massive amount of work to get anything working 
and to get a project beyond a simple demo off the ground. Angular2 has 
hard-to-read syntax. How am I supposed to tell the difference between (click) 
and [click] and {click}, and what each does?

 

Look, I totally hate JS, and I only started to use these tools myself last week. 
I also found them confusing at times; they all have funny names, and I can't 
figure out why there are configurations for requirejs, amd, system, systemjs and 
four other loader libraries, or what the differences between them are. But heck, 
after a few days of work I got something cool working, and a great UI that I had 
tried to build before in Angular, where I hated myself every day I had to learn 
some random new awkward behaviour, directive, service, provider, filter ...

 

I found Aurelia to rock in design and simplicity compared to Angular and found 
it fast to learn and apply.

 

Just my 2 cents.

 

 

 

On Wed, Sep 9, 2015 at 2:24 PM, Thomas Koster wrote:

On 9 September 2015 at 13:18, Corneliu I. Tusnea wrote:
> Compared to Angular2, Aurelia simply rocks, and it's so dead easy to
> set up.

Aurelia looks interesting, but a quick scan through "Getting
Started" [1] reveals that you need the following to, ah, get started:

- node.js for the entire toolchain,
- Gulp to build,
- jspm or bower for front end package management,
- Yeoman for scaffolding,
- systemjs for client-side DI,
- Babel, CoffeeScript or TypeScript for "compiling" to browser-
  compatible ES5/JavaScript.

I have none of these things installed, yet I can start a new AngularJS

RE: TypeScript summary

2015-08-27 Thread Paul Glavich
 JS ecosystem can go to hell.

Lol. It has been there already. :) It re-wrote hell in the form of a closure.

 

Seriously though, in answer to the React comment below, I too find React's 
syntax atrocious. Note that there is nothing at all relating React to C#/MVC. It 
is a fast rendering system by way of its virtual DOM. It does have a good 
composition model, but I simply cannot stand its syntax. You give up an 
easy-to-read syntax for speed and composability. Flux is a pattern library that 
is an augmentation to React, which I think is quite good but could be used 
without React as well.

 

It is the new black in terms of frameworks, though, so people are saying it's 
awesome and everything else is crap, which is typical of the polarising JS dev 
community. It is only at version 0.13.3, so immature that I would not entertain 
it at this time, but many are.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tony Wright
Sent: Wednesday, 26 August 2015 12:11 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: TypeScript summary

 

I wouldn't mind knowing what is so good about React. I'm not enjoying the 
syntax of React so far. At the moment if I was to build a new substantial app 
it would be using Angular. I feel that you can write some pretty substantial 
applications in Angular. Having had a dabble with React, I don't get the same 
feeling, so I am wondering if the hype is bigger than the product itself?

 

I know React is more about the V in MVC and Angular covers the entire MVC 
pattern in Javascript, but I am trying to understand - are they still 
essentially trying to solve a similar problem? I can go without using C# MVC 
applications at all (excepting WebApi) with Angular, so is the difference that 
React is meant to be used in conjunction with C# MVC solutions?

 

 

 

On Wed, Aug 26, 2015 at 11:57 AM, William Luu <will@gmail.com> wrote:

RE: DOM manipulation.

 

Here's a (intro and) comparison between DOM manipulation jQuery and React

http://reactfordesigners.com/labs/reactjs-introduction-for-people-who-know-just-enough-jquery-to-get-by/

 

On 26 August 2015 at 10:03, Bec C <bec.usern...@gmail.com> wrote:

+1 for Greg's comments. Coming from a SQL background I found it relatively easy 
to jump into C# and .NET, but my jump to JS wasn't so smooth.

 

 

On Wed, Aug 26, 2015 at 9:55 AM, Greg Keogh <gfke...@gmail.com> wrote:

I hope this is my final essay on JavaScript (and so do you!). In summary, a few 
weeks ago I volunteered to write an in-browser script-driven demo app which is 
simply a navigation stack of 4 screens. Angular is currently so trendy that I 
spent several hours attempting to learn and use it, but due to the lack of an 
IDE, no debugging, no guidance, the custom terse syntax and complex dependencies 
I gave up (then I learnt it's being rewritten in TypeScript anyway). I've 
expressed my anger at the 'zoo' of uncoordinated and competing JS libraries.

I spent all of yesterday optimistically studying and trying TypeScript, as the 
familiar IDE and structure seemed ideal for someone from a C++/Java/C# 
background. Given my belief that the JS world is really chaotic, my overall 
conclusion is:

TypeScript is organised chaos.

I was reminded of moving from C to C++ 20 years ago. C was so freeform you 
could write spaghetti. C++ helped you write object oriented modular spaghetti. 
Just like that, TS is trying to tame the JS spaghetti and make it feel OOPish 
and respectable to people with my background, but it's still just putting a 
wedding gown on a pig.

The good news is though, that once I eventually found guidance on how to 
organise multiple TS source files, how to use module { } like namespaces, when 
to use the reference, and why you use --out to concat files, then TS is 
probably the least worst option I've seen so far for writing large JS apps. At 
least you will finish up with organised modular chaos.
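The module { } organisation described above can be sketched roughly like this (hypothetical `Shop`/`Cart` names; TS 1.x spelled it `module`, later renamed `namespace`, and multiple such files compiled with --out concatenate into one script sharing the namespaces):

```typescript
// A 2015-style TypeScript "internal module" (namespace). Each source file
// can reopen the same namespace; tsc --out app.js concatenates them all.
namespace Shop {
  export class Cart {
    private prices: number[] = [];

    add(price: number): void {
      this.prices.push(price);
    }

    total(): number {
      // Typed lambda; the compiler checks sum and p are numbers.
      return this.prices.reduce((sum, p) => sum + p, 0);
    }
  }
}

const cart = new Shop.Cart();
cart.add(9.5);
cart.add(20);
console.log(cart.total()); // 29.5
```

Anything not marked `export` stays private to the namespace, which is what gives the "organised modular" structure the essay lands on.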

So you might be able to tame JS with TS, but we are still stuck with the 
cumbersome DOM and jQuery. While trying to give my web page app behaviour, I had 
to have jQuery reference web pages continuously open so I could remember the 
arcane and inconsistent syntax for doing the simplest things, like toggling 
visibility or setting text or class attributes. This isn't really a JS-related 
problem, but I find manipulating the DOM from JS and jQuery tedious beyond 
endurance.

In fact my endurance is exhausted. I will not write the demo and have 
commissioned someone else to do it. They write this sort of thing for a living, 
so I look forward to learning how they do it. I've learnt a lot in recent weeks 
anyway and have decided that for future work like this I will use TS and jQuery 
because they're the least worst (for now), and the rest of the JS ecosystem can 
go to hell.

 

Greg K

 

 

 



RE: Last words on AngularJS

2015-08-27 Thread Paul Glavich
Agree with that.

 

Best practice is a furphy. You don't get one with server side either; JS is just 
more finicky. I often use Yeoman to generate a folder/project structure but 
never really use it verbatim. I am too opinionated for that :)

 

I do give credit to Greg’s point around multiple dependencies and seemingly 
brittle nature of it all.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: Wednesday, 26 August 2015 12:26 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Last words on AngularJS

 

I think the problem you are experiencing, Greg, is that you are looking for the 
right way to write Javascript apps. Is that what you mean by best practice?

 

I look at that as being similar to someone saying they are looking for the 
right woman. There is no right woman, there are just a large set of 
permutations of women. As soon as you try to apply rules of classification (ie 
a filter to apply to separate right from wrong) then you are applying an 
artificial, subjective ruleset. 

 

Try not to think of it in terms of right and wrong. Javascript is a guide, 
Greg. She can help you to find the path. 

 

  

 

On Wed, Aug 26, 2015 at 8:18 AM, Greg Keogh <gfke...@gmail.com> wrote:

Did you come across Yeoman and the Angular generator?

https://github.com/yeoman/generator-angular#angularjs-generator-

Those tools scaffold/generate code based on “best practice”.

 

This is a great illustration of my gripe with the JS ecosystem.

 

Yeoman generator for AngularJS - lets you quickly set up a project with 
sensible defaults and best practices. There are many starting points for 
building a new Angular single page app, in addition to this one. To see a 
comparison of the popular options, have a look at this comparison.

 

Due to best-practice confusion, we need a JS tool to generate sensible code 
which wraps the underlying JS language, and you need to install yo, grunt-cli, 
bower, generator-angular and generator-karma as dependencies to make it all 
work. It reads like an IT comedy sketch.

 

I'll bet there are people arguing that the best practices aren't the best and 
they know and have implemented better ones! I might write a best practice 
generator in JS and when it's bootstrapped far enough I'll get it to write 
itself.

 

Greg

 



RE: Last words on AngularJS

2015-08-24 Thread Paul Glavich
We use and have used AngularJS with great success. It (IMHO) helps bring order 
to chaos. We haven't really had much of an issue apart from an initial learning 
curve, and we have built some very complex pieces with it. In addition, the 
resulting code is relatively easy to maintain.

 

I will agree there is plenty of confusion about how to best do things, which 
ironically is also one of its strengths. It does sound like you want a very 
opinionated and prescriptive framework and there are plenty. 

 

Right now though, front end dev is an absolute lottery of frameworks and package 
managers. Pick a framework, hell pick 10, include them with any number of 
package managers, and there is a good chance it won't be the best solution in a 
year's time (and if it is, keeping up with changes to all your dependencies can 
be tough). Front end tooling is currently obsessed with the next new thing, 
rather than getting the job done with a mature thing. While we have currently 
leaned towards AngularJS for current dev, I will be sitting back and watching 
the space before committing to any future frameworks, whether Aurelia, React, 
Angular2, Ember or other. A few will gain ground and possibly fall from favour 
as kinks emerge and real apps put them to the test. By real apps, I mean average 
teams doing LOB apps and such. It is easy to say a framework is great when it is 
used by a team of interstellar genius types. Your team may not fare so well, 
though.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Scott Barnes
Sent: Sunday, 9 August 2015 9:02 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Last words on AngularJS

 

Adobe Flex, Silverlight and WPF all had the same techniques, and the same 
issues, as those described with AngularJS. The issue in question is more around 
the ability to load/unload views in an elegant fashion that leaves you with a 
sense of simplicity, and cleanliness in memory collection as well.

 

Binding is also a huge issue; it was never rectified as cleanly as I had hoped 
over the years. I still see binding as a problem, similar to how I guess Entity 
Framework started out: I want to visualise how a field gets its values and trace 
its origins back through the REST APIs, down to the metal if need be.

 

As that's where profiling and stuff comes back to the forefront and helps steal 
some of the sting out of exceptions.

 

I think you're on the same hunt we've always been on since 2005-2009 whereby we 
want to create inline apps that have deep linking style loading but without the 
complexity and code management overheads.

 

AngularJS or whatever isn't really meant to last beyond maybe a year or two. 
Anyone who's still shooting for an app that gets designed in 2015 and is still 
usable and manageable in 2020 is on a fool's errand, as today the modernising of 
apps is constantly going to push your comfort levels. Microsoft is also quite 
hungry to regrow its grass roots, so I'd expect a bit more healthy chaos from 
them here as well.

 

That all being said, the JS route is a step backwards, not forwards, as it's 
still trying to pick up lost ground from tech like WinForms, Silverlight, WPF 
and Adobe Flash/Flex (yeah, even these had it better), and it's still a bit of a 
hacky approach to obfuscating as much free-thinking JS from the devs as 
possible.

I think you're feeling the inertia, though, of the wild js-west, in that there 
are really no rules here or compiler feedback loops: you write it, it does 
something visually, you can't see any obvious signs of memory profilers going 
out of shape... hey... ship it... and that's the part that leaves me a bit 
nervous personally ;) In the hands of a mature dev it could work great, with 
longevity intact... but... in my experience not all teams are mature, and you 
have a variety of styles of thinking / code here, so it's now back to some 
serious code reviews to maybe act as the last safeguard in thinking here?

 

*If* I had to pick, I'd say AngularJS is probably the closest to the previous 
styles of thinking, and that's probably the first red flag ;)




---
Regards,
Scott Barnes
http://www.riagenic.com

 

On Sun, Aug 9, 2015 at 7:35 PM, Greg Keogh <gfke...@gmail.com> wrote:

Were you using RequireJS?
RequireJS is something you can use to bring in common and worker viewmodels.
It may be your missing link!

 

I just had a glance over the main web pages. In a rush, I get the impression 
that this is a library that simulates dependencies between JavaScript files 
(because there is no such native concept). I can't picture in my head how this 
would boost productivity or enhance the development experience; it looks like 
just something else to clutter and confuse what you're doing. But it's late, so 
I might be missing the point and I need to read more -- GK

 



RE: Last words on AngularJS

2015-08-24 Thread Paul Glavich
Greg and others,

 

One of JavaScript's strengths is also its weakness. You can do literally 
anything with it. It is one of the most flexible and adaptable languages there 
is, and this (IMHO) is one of the reasons it is popular. With that, many people 
twist and change it to what they think is best, and there are plenty of 
differing opinions, so here we are.

 

As industry experts/veterans, it is always a challenge to look at the good 
parts of a framework/approach and:

a)  Accept the bad bits and use it

b)  Accept only the good bits and augment so that the bad bits are mitigated

c)   Watch and provide input to try and steer 
communities/frameworks/languages in the desired direction

d)  Do it all using the basic accepted tools currently available. This 
means things like just plain js/ jQuery/ES6(maybe using things like babel) etc.

 

It is all in flux right now, hence my call to wait it out for a bit (to see 
which libraries gain community momentum). Expecting strict guidance on how to do 
things in a particular framework for a large application is always going to be 
contentious in our field because of the "it depends" clause. There is no one 
way. The fact that you have had to research something quite a bit should at the 
very least have helped you form a much leaner and clearer picture of what you 
want, which can feed into the constant decision process as well as design.

 

It is not easy, but do not get too hung up on finding the perfect way via a 
particular tool (analysis paralysis). Pick the best option that you think 
applies to you, weigh the risks and commit. The rest you can tailor to what you 
want. Final note: on a current project we are using Angular, yet there are 
legacy elements still working fine using prototype.js. Point being, at the end 
of the day, if you are just using plain old JS (whether via a particular library 
or not), it will continue to work for a long, long time.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Adrian Halid
Sent: Monday, 24 August 2015 9:23 PM
To: 'ozDotNet' <ozdotnet@ozdotnet.com>
Subject: RE: Last words on AngularJS

 

In the world according to GitHub, JavaScript is now the number 1 most popular 
programming language used in their repositories. Might be due to all the 
JavaScript frameworks out there :).

It is also interesting to see the climb of Java from 7th to 2nd over the last 7 
years.

 

https://github.com/blog/2047-language-trends-on-github

 

 

 

Regards

 

Adrian Halid 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Greg Keogh
Sent: Monday, 24 August 2015 6:26 PM
To: ozDotNet <ozdotnet@ozdotnet.com>
Subject: Re: Last words on AngularJS

 

Paul, most of what you said actually supports my anguish over the lottery of 
kits, tools, packages and standards (ha!) and fads in the JavaScript 
ecosystem.

 

Over the last week or more since I expressed my dismay, I've been reading more 
and more about the zoo of frameworks that decorate JavaScript and attempt to 
hoist it up into the world of real languages. It's getting so stupid that the 
AngularJS team seems to have decided to completely rewrite it for v2 using 
TypeScript, and someone got upset and split off to make Aurelia because it was 
more pure, but apparently they're friends again now, I think. It's worse than a 
zoo; it's like a steaming compost bin.

 

I got all excited about TypeScript last weekend and spent an afternoon reading 
about it and fiddling to see if it has promise. So I create a new HTML project 
and I get one small source file that shows the time. The sample code is raw JS 
from the 90s, and I have to go looking for a way to integrate jQuery and/or 
AngularJS into the project. So, dozens of opinionated pages later, I discover I 
just about have to reinvent the steam engine to try and integrate them, and 
there are literally dozens of experts all claiming they know the best way to do 
it, with all sorts of cryptic pseudo-functional coding tricks. I simply want to 
know how to structure a large TS project, but there is no reliable guidance 
anywhere; it's just a dog's breakfast.

 

This is what happens when a script gets accidentally promoted to become the 
newfangled language for driving LOB apps on the web, without proper planning by 
industry experts and academics. There are no conventions for code or project 
structure, references, dependencies, building, testing ... anything! ... it's 
just a bottomless kludge of more tools made in JavaScript to try and make 
itself look and behave sensibly.

 

I am now overwhelmed by despair at the damage JavaScript has done to software 
development in the 21st century. I know there are lots of younger developers 
out there who shrug and think "what's so bad? it's working", but I think 
they're just used to suffering and take it for granted.

 

Greg

 


RE: [OT] Ultrabook for noob

2014-11-27 Thread Paul Glavich
Yep they were really good. They seem like a small shop but all emails were 
answered promptly. When I initially bought the machine, I paid (I think) 
$1800 for it (received the machine and all good), then they (Affordable Laptops) 
said their supplier had made a small error in the price and I was supposed to pay 
only $1740 (or something like that), so they sent me a credit. They did this 
proactively, without any contact from me, which I think is really good.

 

-Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Tom P
Sent: Thursday, 27 November 2014 5:57 PM
To: ozDotNet
Subject: Re: [OT] Ultrabook for noob

 

Thanks Glav. Somebody recommended affordablelaptops last week but I couldn't 
recall the name. Have you had any experience with their warranty/service?

 

Thanks

Tom

 

On 27 November 2014 at 17:52, Paul Glavich subscripti...@theglavs.com wrote:

Highly recommend the Gigabyte P34v2 (I bought mine from Affordable Laptops, 
www.affordablelaptops.com.au)

 

Specs are:

* i7-4700HQ (up to 3.4GHz)
* 16GB memory
* 256GB SSD
* Dedicated gfx card (NVIDIA GeForce GTX 860M with 2 or 4GB GDDR memory 
– can’t remember) in addition to the Intel 4600 (switches to the 4600 in battery 
mode)
* About 1.7kg
* Battery life is great.

 

It's actually a gaming rig but very thin and light.

 

-Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Tom P
Sent: Thursday, 27 November 2014 3:00 PM
To: ozDotNet
Subject: Re: [OT] Ultrabook for noob

 

My thoughts exactly! :) I was thinking more about how long it normally lasts 
before an i7 and 16GB RAM become too little

 

Thanks

Tom

 

On 27 November 2014 at 14:56, Stephen Price step...@perthprojects.com wrote:

I only appreciate mine for six months. After that I want a new one. ;)

 

On Thu, Nov 27, 2014 at 11:45 AM, mike smith meski...@gmail.com wrote:

ATO allows laptops to depreciate over 3 years, desktops 4

 

https://www.ato.gov.au/Individuals/Income-and-deductions/In-detail/Deductions-for-specific-industries-and-occupations/IT-professionals---claiming-work-related-expenses/?page=21

 

On Wed, Nov 26, 2014 at 5:50 PM, Tom P tompbi...@gmail.com wrote:

Hi Stephen

 

Thanks for the quick response. Actually a coworker suggested this list a while 
ago but I forgot all about it.

 

Surface Pro 3 did have me interested at first but it is too small in my opinion 
and I prefer to just use the laptop and not have to hook up to an external 
monitor and keyboard and so on. Even a 13" has me concerned. I may go with a 15".

 

I've heard great things about the Macbook but the keyboard didn't feel right to 
me for Windows.

 

I'll check out the XPS 15.

 

Wow, 16GB RAM? I didn't realise that was such an issue. 8GB would be plenty for 
me I think but I guess going forward that will matter. How often do people 
change laptops? Is 3-4 years a stretch?

 

Thanks

Tom

 

 

On 26 November 2014 at 17:02, Stephen Price step...@perthprojects.com wrote:

Welcome Tom!

(OMG where did we get a new poster from?)

 

Having more than a few laptops (both past and present) I feel slightly 
qualified to reply. I've found Dell pretty good, but always get the longest 
warranty you can get your hands on. It's happened a couple of times where a 
laptop has needed parts/repairs and it's been out of warranty. When that happens 
it's usually better to upgrade than spend money on it.

 

I'm currently running a MacBook Pro 13" (for iOS dev cross platform stuff with 
Xamarin), a Surface Pro 3 (for most dev) and an Asus gaming laptop (amazing 
machine but a bit too heavy to lug about. Awesome for gaming at a mate's place, 
or when others bring their laptops and you want to be sociable in the same 
room).

The only thing that stops me from saying get a Surface Pro 3 is the RAM limit 
of 8GB. If it could have 16GB it would be the way to go, hands down. The other 
two laptops both have 16GB and it's really the only thing that lets the Surface 
Pro 3 down (spec wise). That said it's the most portable, and most adaptable 
(laptop or tablet mode) and even wins on battery life by a huge margin.

 

That said, the real answer is "it depends". You need to look at what you want 
it for and make sure whatever you get fits that first. Oh, I had a Samsung 
Ultrabook (the QuadHD touch screen one) and was disappointed with the high DPI 
experience of Windows 8. Passed it to my daughter as a Uni laptop and she loves 
it.

I almost got the Dell XPS 15 (with the QuadHD touchscreen) but got the Surface 
Pro 3 instead. So far I've not regretted that decision but I daresay the Dell would 
have also

RE: SSL for ASP.NET MVC

2014-11-27 Thread Paul Glavich
External content can be tricky since you do not control whether it's available 
via HTTPS, so check on that.

 

Additionally, don’t do something like <script src="http://somewhere/jquery.js"></script>

as when you go to SSL it will complain about loading insecure content and fail. 
For the most part, using MVC and relative URLs you should not have to worry 
about it. If you need to embed some externals, you can optionally use the "//" 
syntax, which adopts the browser's scheme when loading them, so

 

<script src="//somewhere/jquery.js"></script>

will equate to http://somewhere/jquery.js or https://somewhere/jquery.js 
depending on whether your site is using SSL or not.

 

Also, if using forms auth, you can enforce your login to be SSL via

<authentication mode="Forms">

  <forms loginUrl="~/login" timeout="2880" requireSSL="true" />

</authentication>

 

 

You could leave this out in development config but include it in release config. 
There is also the [RequireSsl] attribute. See 
http://weblog.west-wind.com/posts/2014/Jun/18/A-dynamic-RequireSsl-Attribute-for-ASPNET-MVC
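Expanding on the suggestion of leaving requireSSL out of development config but enforcing it in release config, a Web.Release.config XDT transform along these lines would flip the flag at deploy time (a sketch, assuming the standard config-transform support in Visual Studio web projects; the loginUrl and timeout values simply mirror the example above):

```xml
<!-- Web.Release.config: turns requireSSL on only for release deployments.
     Assumes the base Web.config already contains the <forms> element shown above. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <authentication mode="Forms">
      <forms xdt:Transform="SetAttributes(requireSSL)" requireSSL="true" />
    </authentication>
  </system.web>
</configuration>
```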

 

 

-Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Michael Ridland
Sent: Friday, 28 November 2014 8:49 AM
To: ozDotNet
Subject: Re: SSL for ASP.NET MVC

 

Hi Tom

 

It can be more complicated than that, take a look at this. 

 

http://nickcraver.com/blog/2013/04/23/stackoverflow-com-the-road-to-ssl/

 

 

 

 

 

On Fri, Nov 28, 2014 at 8:40 AM, Tom P tompbi...@gmail.com wrote:

Hi Noonie

 

That sounds good. So it can be turned on later on if necessary.

 

Is it necessary for me to demand SSL for login-type methods, as those should 
definitely be secure in a live environment? It doesn't concern me while 
developing but it scares me to think the administrators may simply forget to 
turn on SSL and then login details will float around unencrypted and the 
blame will find me somehow.

 

 

Thanks

Tom

 

 

 

On 27 November 2014 at 20:35, noonie neale.n...@gmail.com wrote:

Tom,

You can ignore all that stuff as it should have nothing to do with your web 
application.

It's a server thing when running behind IIS etc. and all the magic happens 
lower down the stack.

-- 
noonie

On 27/11/2014 4:20 pm, Tom P tompbi...@gmail.com wrote:

Noob question here.

 

How would I go about adding SSL to an MVC site? Is it simply a matter of turning 
a switch on in the server somewhere so the admins can do it, or do things need 
to be done in code? I am reading about a whole variety of ways such as adding 
attributes, filters, configuration settings, cookie properties, certificates 
and so on. Seems complicated. I was under the impression I could do without it 
in development and have it simply turned on once it goes live. Is this not 
the case?


 

 

Thanks

Tom

 

 



RE: [OT] Ultrabook for noob

2014-11-26 Thread Paul Glavich
Highly recommend the Gigabyte P34v2 (I bought mine from Affordable Laptops, 
www.affordablelaptops.com.au)

 

Specs are:

* i7-4700HQ (up to 3.4GHz)
* 16GB memory
* 256GB SSD
* Dedicated gfx card (NVIDIA GeForce GTX 860M with 2 or 4GB GDDR memory 
– can’t remember) in addition to the Intel 4600 (switches to the 4600 in battery 
mode)
* About 1.7kg
* Battery life is great.

 

It's actually a gaming rig but very thin and light.

 

-Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Tom P
Sent: Thursday, 27 November 2014 3:00 PM
To: ozDotNet
Subject: Re: [OT] Ultrabook for noob

 

My thoughts exactly! :) I was thinking more about how long it normally lasts 
before an i7 and 16GB RAM become too little

 

Thanks

Tom

 

On 27 November 2014 at 14:56, Stephen Price step...@perthprojects.com wrote:

I only appreciate mine for six months. After that I want a new one. ;)

 

On Thu, Nov 27, 2014 at 11:45 AM, mike smith meski...@gmail.com wrote:

ATO allows laptops to depreciate over 3 years, desktops 4

 

https://www.ato.gov.au/Individuals/Income-and-deductions/In-detail/Deductions-for-specific-industries-and-occupations/IT-professionals---claiming-work-related-expenses/?page=21

 

On Wed, Nov 26, 2014 at 5:50 PM, Tom P tompbi...@gmail.com wrote:

Hi Stephen

 

Thanks for the quick response. Actually a coworker suggested this list a while 
ago but I forgot all about it.

 

Surface Pro 3 did have me interested at first but it is too small in my opinion 
and I prefer to just use the laptop and not have to hook up to an external 
monitor and keyboard and so on. Even a 13" has me concerned. I may go with a 15".

 

I've heard great things about the Macbook but the keyboard didn't feel right to 
me for Windows.

 

I'll check out the XPS 15.

 

Wow, 16GB RAM? I didn't realise that was such an issue. 8GB would be plenty for 
me I think but I guess going forward that will matter. How often do people 
change laptops? Is 3-4 years a stretch?

 

Thanks

Tom

 

 

On 26 November 2014 at 17:02, Stephen Price step...@perthprojects.com wrote:

Welcome Tom!

(OMG where did we get a new poster from?)

 

Having more than a few laptops (both past and present) I feel slightly 
qualified to reply. I've found Dell pretty good, but always get the longest 
warranty you can get your hands on. It's happened a couple of times where a 
laptop has needed parts/repairs and it's been out of warranty. When that happens 
it's usually better to upgrade than spend money on it.

 

I'm currently running a MacBook Pro 13" (for iOS dev cross platform stuff with 
Xamarin), a Surface Pro 3 (for most dev) and an Asus gaming laptop (amazing 
machine but a bit too heavy to lug about. Awesome for gaming at a mate's place, 
or when others bring their laptops and you want to be sociable in the same 
room).

The only thing that stops me from saying get a Surface Pro 3 is the RAM limit 
of 8GB. If it could have 16GB it would be the way to go, hands down. The other 
two laptops both have 16GB and it's really the only thing that lets the Surface 
Pro 3 down (spec wise). That said it's the most portable, and most adaptable 
(laptop or tablet mode) and even wins on battery life by a huge margin.

 

That said, the real answer is "it depends". You need to look at what you want 
it for and make sure whatever you get fits that first. Oh, I had a Samsung 
Ultrabook (the QuadHD touch screen one) and was disappointed with the high DPI 
experience of Windows 8. Passed it to my daughter as a Uni laptop and she loves 
it.

I almost got the Dell XPS 15 (with the QuadHD touchscreen) but got the Surface 
Pro 3 instead. So far I've not regretted that decision but I daresay the Dell would 
have also been a good buy (without the tablet form tho)

 

HTH

  

 

On Wed, Nov 26, 2014 at 12:55 PM, Tom P tompbi...@gmail.com wrote:

Hi

 

First time poster here so please take it easy on me.

 

I've only ever had a desktop but I'm looking to purchase my first laptop, ultrabook 
preferred. I've been looking at the Dells from the warranty and support feedback 
I've received, the XPS 13 mainly. I wish to use it for development mainly 
with some minor travel. Can some of the wiser more experienced developers here 
share their thoughts and recommendations?

 

Thanks

Tom

 

 





 

-- 

Meski


  


Going to 

RE: Any one here using the Visual Studio Online (TFS in the cloud)?

2014-08-03 Thread Paul Glavich
We use TFS Online all the time. We had noticed that the StackRank column is also 
missing from some of the forms. While you can include it to see what the values 
are, you will find these values change, as TFS Online now uses that field for 
internal ordering of backlog items.

 

Quite painful for us since we use a customised version of scrum where we 
actively attributed values to stack rank to assist in prioritisation of a huge 
backlog at various phases. One might argue we should not have a huge backlog, 
but in our instance that is not practical at all, as stack rank was one of the 
few common fields between bugs and stories. At any rate, we have had to 
accommodate it by using the backlog ordering feature and a separate area for 
defects/bugs. The removal of StackRank has forced us to clean up our backlog, 
which is a good thing, but we still need to retain a fair bit so it has caused 
us more administrivia pain. I often find that scrum/agile practices tend to 
work best in greenfields projects, and the strictness that is advocated causes 
more friction for no gain in much longer running projects. (Note: I am a 
scrum/agile advocate but I tend to customise each to suit the 
environment/team/needs.)

 

I believe you can still access it if you use something like Excel to do your 
work item management, but you are fighting the system and I would guess there 
may be times when that value gets overwritten internally by TFS Online, so it is 
a risky proposition.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Preet Sangha
Sent: Tuesday, 8 July 2014 3:45 PM
To: ozDotNet
Subject: Re: Any one here using the Visual Studio Online (TFS in the cloud)?

 

Thanks David. Will contact Grant directly.

 

On 8 July 2014 17:09, David Kean david.k...@microsoft.com 
mailto:david.k...@microsoft.com  wrote:

Grant says:

 

Can you have them contact me directly with their accountname.visualstudio.com?

 

There was an issue on Friday regarding backlog columns, we may have regressed 
something in some cases.

http://blogs.msdn.com/b/vsoservice/archive/2014/07/03/issues-with-visual-studio-online-work-item-backlog-management-2-7-investigating.aspx

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of David Kean
Sent: Monday, July 7, 2014 9:43 PM
To: ozDotNet; Grant Holliday
Subject: RE: Any one here using the Visual Studio Online (TFS in the cloud)?

 

Grant?

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Preet Sangha
Sent: Monday, July 7, 2014 6:18 PM
To: ozDotNet
Subject: Any one here using the Visual Studio Online (TFS in the cloud)?

 

This morning all of us who logged into TFS (as opposed to were already logged 
in on sleeping machines) have had Stack Rank removed from the Task Work Item 
template.

 

Cloud TFS doesn't allow changes to the WIT as you can with an in-house TFS 
server, so I was wondering if any of the MS people here know if there was a 
change to TFS that could have made this happen for us, or is there something we 
need to do specially to fix this?


 

It's not critical at the moment but I'm sure if it affects us it will affect a 
bazillion other people.

 

 

Thanks.

-- 
regards,
Preet, Overlooking the Ocean, Auckland 





 

-- 
regards,
Preet, Overlooking the Ocean, Auckland 



RE: [OT] Browser use

2014-05-26 Thread Paul Glavich
Interesting you note that only last month Chrome exceeded IE usage (even though 
it is a Microsoft site). We have had a consistently much larger percentage of 
Chrome usage, as in Chrome 52% vs Firefox 18% vs IE 14%, and it has been that 
way for a few years.

 

I too use Chrome for almost everything. The dev tools are good and, being an 
AngularJS guy, it has Batarang 
(https://chrome.google.com/webstore/detail/angularjs-batarang/ighdmehidhipcmcojjgiloacoafjmpfk?hl=en) 
for AngularJS dev.

 

I do leave IE the default for Win8 browsing tho as it seems to work nicer from 
an integration perspective.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of GregAtGregLowDotCom
Sent: Thursday, 22 May 2014 1:33 PM
To: 'ozDotNet'
Subject: RE: [OT] Browser use

 

I never thought I’d change my default browser but I had to do so with IE11. 
Moved to using Chrome as a default. I eventually got to a point where I just 
had to get things done, and whether or not Microsoft believe they did the right 
thing by removing the agent string entries, they “broke the internet” for too 
many people.

 

I don’t think they understood the politics of this. If someone is browsing ok, 
and updates to IE11 and then so many sites don’t work, they won’t blame the 
sites, they’ll blame what they just updated.

 

There were lots of Microsoft’s own properties that wouldn’t work with IE11.

 

You can’t even lodge a BAS return here in Australia with IE11 and if you talk 
to the ATO, they just say “Use Chrome”.

 

Wish it wasn’t so.

 

Last month was the first month where I noted more Chrome use than IE use at our 
web site, and we’re a Microsoft related site. That’s a big change for us from 
6 months ago. It used to be an IE majority for us.

 

Apart from the lack of compatibility with so many existing sites, I quite like 
many things about IE11. I just can’t get work done when it’s my default browser.

 

Regards,

 

Greg

 

Dr Greg Low

 

1300SQLSQL (1300 775 775) office | +61 419201410 mobile | +61 3 8676 4913 fax 

SQL Down Under | Web: www.sqldownunder.com

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of anthonyatsmall...@mail.com
Sent: Thursday, 22 May 2014 11:23 AM
To: 'ozDotNet'
Subject: RE: [OT] Browser use

 

Just want to spread my use of technology amongst many companies instead of a 
few…it also inspires competition and innovation.

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Stephen Price
Sent: Thursday, 22 May 2014 1:18 PM
To: ozDotNet
Subject: Re: [OT] Browser use

 

So you would rather support a small non powerful company that uses your data 
any way they want? 

 

The underdog so to speak? 

 

Personally, I'd rather use the browser that does everything I want it to and 
none of the things I don't. Targeted advertising? Sure I want that. I *WANT* to 
see ads for the latest and greatest Tablet or Monitor. I *don't* want to see 
tampon ads. Sign me up. Shut up and take my money!

 

If said company becomes an issue, I'll change. I'm a fickle customer, more so 
than they are. I'm using them more than they are using me. 

 

On Thu, May 22, 2014 at 10:25 AM, anthonyatsmall...@mail.com wrote:

I use Firefox. Chrome is great but I do not want to support a company that is 
so powerful and uses your data whatever way they want.

 

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Craig van Nieuwkerk
Sent: Thursday, 22 May 2014 12:06 PM
To: ozDotNet
Subject: Re: [OT] Browser use

 

I am pretty much with you. I find IE works better with VS so use it for most 
development unless I need to do a lot of client side debugging in which case I 
use Chrome. I then use Chrome for everyday use. I only use Firefox for cross 
browser testing.

 

Craig

 

On Thu, May 22, 2014 at 12:01 PM, Stephen Price step...@perthprojects.com wrote:

I disagree. I think? 

 

I find I use Chrome and IE. For development it depends what I'm doing. If I 
want to hit a breakpoint in VS then IE does that. If I want to use the debugger 
in the browser then I use Chrome. IE keeps changing their Developer tools and 
even though they are improving I still find Chrome more productive for 
debugging. 

 

For actual USE I use Chrome for most things, but occasionally something doesn't 
work right and I switch. Pluralsight for example seems to hang after a while in 
Chrome. No issues in IE.

Not used Firefox in some years. Toggling between two is fine. A third becomes 
too much.

 

On Thu, May 22, 2014 at 9:53 AM, David Burstin david.burs...@gmail.com 

RE: [OT] Laptop to replace macbook

2014-05-15 Thread Paul Glavich
I bought this Gigabyte P34G-V2 
(http://www.affordablelaptops.com.au/contents/en-us/d448_gigabyte-p34g-ultrablade-laptop-notebook.html)

 

The one on the link says 8GB RAM but I got a 16GB RAM model for $1700.

 

Core i7-4700HQ (8 logical cores)

Dedicated NVIDIA GTX 860M graphics with 4GB DDR5

Backlit keyboard, nice screen

2 yrs warranty – on site

256GB SSD

It's fast, quiet and light (approx. 1.7kg).

 

Really nice unit.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Bec Carter
Sent: Thursday, 15 May 2014 2:38 PM
To: ozDotNet
Subject: [OT] Laptop to replace macbook

 

Hi everyone

 

I recently had my 13" 2012 MacBook Pro destroyed. 8GB RAM and 128GB solid 
state. Spent around $1700 when I bought it. Loved using it as it was light, quick 
and just felt nice in my hands.

Now I have to replace it but I also need a new Windows laptop (13" also) for 
some coding, so I thought I could get something that would replace both.

 

Can anyone recommend something they've used or are using currently? Same size, 
weight if possible, ram and solid state is a must. Budget under 2k.

 

Cheers

Bec



Tester wanted

2014-05-12 Thread Paul Glavich
Hi all,

 

If you, or someone you know, is a good tester looking for a
new job, then we have opportunities here at Saasu (http://saasu.com and
https://secure.saasu.com/a).

 

We are based in Sydney, overlooking Hyde Park right across from St James
station. Saasu is Australia's leading online accounting solution and we are
growing.

We are in need of a good tester to work in our testing team.

Ideally they would be able to:

* Work independently

* Know Microsoft technologies and .Net

* Preferably can do a little coding too

* Know about and/or how to do automated functional testing (in
addition to manual)

* Happy to work in a small but dedicated and highly efficient team

* Be generally awesome

 

If you are interested or know someone who is, please send them my way. You
can use this email address or you can use p...@saasu.com

 

 

-  Paul Glavich

CTO Saasu.com



RE: Migrating TFS

2014-02-11 Thread Paul Glavich
We moved from a 3rd party hosted full TFS instance to TFS Online, however we
only use the work items, not source control (I prefer Mercurial/Git).

 

It was a little painful as we had used some customisation to
fields/templates.

 

However, it was *mostly* ok (if a little time consuming). I just got the
entire backlog into Excel, did the same with TFS Online, copied common fields
from one Excel sheet to the other, and published to TFS Online. This got us an easy
80-85% there. Other stuff was customised or had some other weirdness we had
to look into, but not too bad. We kept the old instance going while we did
some sanity checks and ensured all was ok.

 

BTW, TFS Online is great. Love the web interface and use it instead of the VS
integration.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Stephen Price
Sent: 11 February 2014 6:57 PM
To: ozDotNet
Subject: Re: Migrating TFS

 

Grant,

I did a migration from one TFS server to another and it was a horrible
experience. I don't recall the tool I used but I had the added complication
of using a different Template on the destination server and it was trying to
migrate loads of mismatching fields. The source control was ok and history
seemed to work. The work items were sketchy, with lots not migrated. We ended
up keeping the old TFS server about in read-only for reference.

 

Good job going to the cloud. I use Visual Studio Online for my own stuff and
it's brilliant. Shame they don't make it easier to migrate into.

 

cheers,

Stephen

p.s. if you need help with it let me know ;)

 

On Tue, Feb 11, 2014 at 1:48 PM, Grant Maw grant@gmail.com wrote:

Thanks Anthony. We're not worried about work items, just source code and
history at this point, including branches. The TFS Integration Platform is
beavering away as I write this (trying it out on a test copy of the
project), telling me that 176 of 335 change groups have been migrated.

I guess I'll just let it run and see where it lands me.

 

On 11 February 2014 15:40, Anthony Borton antho...@enhancealm.com.au wrote:

Hi Grant,

 

I moved a client with around 35 team projects from an on-premises TFS up to
Visual Studio Online using the TFS Integration Platform. I was pretty lucky
in that they only needed the source to go up and didn't have work items to
worry about. The process was quite a bit more time consuming than I had
planned and it was a seemingly never-ending exercise in massaging settings
to get the source (with history) from each TP up to the cloud. A future TFS
2013 update should include a feature to help move data from VSO down to TFS
but I haven't heard if there is anything there to help go the other way.

 

Cheers

 

Anthony Borton

Senior ALM Trainer/Consultant

Visual Studio ALM MVP

Enhance ALM Pty Ltd

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On Behalf Of Grant Maw
Sent: Tuesday, 11 February 2014 3:07 PM
To: ozDotNet
Subject: Migrating TFS

 

Hi All

Has anyone moved from on-premises TFS to visual studio online? We have a
large solution, including branches, that needs to be pushed into the cloud
as soon as possible and I'd love to hear any war stories before I start.

I'm thinking about using the tool at http://tfsintegration.codeplex.com/.

Cheers

Grant

 

 



RE: [OT] Surface 2 OR Surface Pro 2

2014-01-14 Thread Paul Glavich
I bought my daughter a Surface 2 Pro for Xmas so she could use it for school 
(in year 9 this year) and I really like it. Zippy and nice enough form factor. 
If it came with i7 I might have picked one up for myself.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com] On 
Behalf Of Stephen Price
Sent: 02 January 2014 2:59 AM
To: ozDotNet
Subject: RE: [OT] Surface 2 OR Surface Pro 2

 

Given I have more tablets than I can poke a stick at, I think I can give you my 
take on it.

I have a desktop PC (my gaming rig), a Surface RT (1 not 2), an Asus tab RT, an 
Asus gaming laptop, a Samsung Book 9 Plus, a Dell Venue 8,
Nexus 7, 10 and 4, and a Nokia 1520.

Use them all depending which is charged and what I'm doing... Favour the Asus 
RT over the Surface, but to be honest I would not get an RT device. See Dell Venue 8. 
Best screen is the Samsung (QuadHD is awesome, touch screen and great battery).
Love the Venue 8, the size is right and I don't take a laptop to work anymore. 
Why RT when you can get Win 8.1? Love the phone (WP8 is good this time 
around).

Typed on my phone :)

If I had to choose just one device it would be the Dell Venue 8. Got VS Premium 
installed on it, and mouse, keyboard and external monitor!

  _  

From: James Chapman-Smith ja...@chapman-smith.com
Sent: 1/01/2014 6:44 PM
To: ozDotNet (ozdotnet@ozdotnet.com)
Subject: [OT] Surface 2 OR Surface Pro 2

Hi folks,

 

I've made the decision to purchase either a Surface 2 or a Surface Pro 2, but 
I'm completely torn as to which one to get.

 

I just want a tablet device that I can use when travelling, sitting on the 
couch, or in bed. I want a keyboard with trackpad/mouse so the touch cover is 
fine for both. I don't think battery life will be a big issue. I'm likely to 
get HDMI adapter for a Pro.

 

My choices are do I get a Surface 2 as my mobile computing device and use 
desktop PCs at home and work OR do I get a Surface Pro 2 and use it for both 
home and work?

 

Will a Surface Pro 2 really be a laptop/desktop replacement PC or is it a pain 
with the small screen and single USB port? Or is it just an overblown toy and 
I'll miss my laptop?

 

Can anyone give me any argument for one or the other?

 

Cheers.

 

James.



RE: SPAM-LOW Re: WCF service best practises

2013-02-04 Thread Paul Glavich
Nope. Remoting is a way to marshal objects across a boundary, whether that be
an appDomain boundary (2 separate appDomains on the same machine) or a network
boundary (machine 1 to machine 2). It looks and operates very much like DCOM if
that helps, in that it appears that you have a reference to the same object on
either end. Security-wise it is not so strong, but it works well and security
can be implemented via its channel sink mechanism. It goes way back to .NET 1
and is embedded in the core framework. Back in .NET 1 days it was either
ASMX web services or Remoting to get across machines. As already surmised,
it is not promoted as a communications or messaging strategy since it is
proprietary and quite low level from a framework perspective.

 

System.Net.EnterpriseServices is kind of a COM+/DCOM wrapper for .Net and
allowed things like easily exposing .Net components via COM+/DCOM (as a
ServicedComponent) through the component services manager (although you
don't have to do this) for use by .Net and non .Net clients alike (primarily
VB 6, C/C++ etc.). I can't say I have used this namespace in a while though
so memory is a little rusty. The component services manager also made it
easier to manage transaction scopes for components and monitor their use, in
particular distributed transactions. The component services manager is
actually quite powerful. You had a bunch of attributes in the namespace
which allowed participation in DTC transaction negotiation. There is more which I am
sure others can highlight for you.
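From memory, the shape of a ServicedComponent is roughly this (hypothetical class; the attributes drive the COM+/DTC behaviour once the assembly is registered with regsvcs.exe):

```csharp
using System.EnterpriseServices;

// Registering the assembly exposes this class through COM+ in the
// component services manager; the attributes drive DTC enlistment.
[Transaction(TransactionOption.Required)]
public class OrderProcessor : ServicedComponent
{
    [AutoComplete] // vote to commit the distributed transaction on normal return
    public void PlaceOrder(string orderId)
    {
        // work against one or more resource managers goes here
    }
}
```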

 

 

-  Glav

 

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Katherine Moss
Sent: Monday, 4 February 2013 5:11 PM
To: ozDotNet
Subject: RE: SPAM-LOW Re: WCF service best practises

 

Now, remind me.  Is System.EnterpriseServices the same as
System.Runtime.Remoting?  

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Keogh
Sent: Sunday, February 03, 2013 5:21 PM
To: ozDotNet
Subject: Re: SPAM-LOW Re: WCF service best practises

 

Apparently the .NET remoting documentation has been removed and you have to
hunt around in the archives for it now (I haven't looked myself), so that's
probably a hint about being out-of-date. However, I have a sentimental
feeling for remoting as we have an intensely used client-server app out
there that will have its 10th birthday later this year, so by the date you
can tell it started in Framework 1.0 with Remoting. A newer app from last
year uses WCF and despite the extra work it gives us no particular advantage
and it works just the same. If you don't need all the hyped flexibility and
generalisation that WCF gives you then it doesn't contribute much.

 

If you just want two .NET app ends to talk over tcp or pipe with minimal
configuration or code bloat then remoting is still viable. I have a tiny
utility project with minimal remoting server and client classes that I throw
into a project if I quickly need two things to communicate. However, there
is little need for it lately as loading stuff into an AppDomain and talking
via a proxy is easier, and guess what ... it uses remoting internally to
talk between AppDomains. So remoting isn't dead, it's just gone into hiding.
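For what it's worth, the AppDomain trick described here is only a few lines (hypothetical names; the proxy returned by CreateInstanceAndUnwrap is remoting under the covers):

```csharp
using System;

public class Worker : MarshalByRefObject
{
    public string Ping() { return "pong from " + AppDomain.CurrentDomain.FriendlyName; }
}

public static class Program
{
    public static void Main()
    {
        AppDomain child = AppDomain.CreateDomain("child");
        // CreateInstanceAndUnwrap hands back a transparent proxy; every
        // call on it crosses the AppDomain boundary via remoting.
        var worker = (Worker)child.CreateInstanceAndUnwrap(
            typeof(Worker).Assembly.FullName, typeof(Worker).FullName);
        Console.WriteLine(worker.Ping());
        AppDomain.Unload(child);
    }
}
```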

 

Greg



RE: SPAM-LOW Re: WCF service best practises

2013-02-03 Thread Paul Glavich
 I thought that .net remoting is kind of out-of-date

It kinda is, and my mention of it was a failed attempt at some humour. Sorry
about that.

 

As to your OSS project, I would hate to discourage you in any way and I
don't know enough what you plan to do to recommend a particular technology.
WCF is a very capable tech, and as some have already mentioned, can be a
little daunting. I would be looking at your ideal way of exposing your
information, who would consume it, and how they would typically consume
this, and base your decision on that. Talking to AD is usually (in my
experience) not done via WCF but through LDAP or something like that. WCF
(and other similar technologies) are more around exposing of services or
even components. I was writing an information consumption app a while back
(one of the million pet projects I nearly completed) that used WCF to allow
plugging in of any type of module to consume any type of information simply
by implementing an interface, so it's definitely a reasonable choice.

 

-  Glav

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Katherine Moss
Sent: Monday, 4 February 2013 5:25 AM
To: ozDotNet
Subject: RE: SPAM-LOW Re: WCF service best practises

 

I thought that .net remoting is kind of out-of-date and shouldn't be used
because of all of the new technologies.  Or is it out-of-date?  And now
you're making me question my plans for my first Open Source project that I
was going to put up in a year or two.  I was going to have it use WCF hosted
as a Windows Service to talk to some other modules like ADLDS and stuff like
that but now I should maybe rethink Project Jenks.  Or is WCF appropriate
for talking to things like that?  Such as, one of my ideas for Jenks (it is
going to be a bunch of things; modules that one can plug into a single
interface as needed), is to create a sort of contact-management interface
that links to both my web site (or any web site for that matter), and to
ADLDS.  And for the ADLDS part, I would have its directory access module
piggy-back on Microsoft's provided web service interface.  PowerShell
already uses it, but why not create another interface other than ADSIEdit
for managing ADLDS  too?  So hopefully in the coming months and year, you
should see something on CodePlex if I can ever get all of this backlog on
learning programming out of the way.  I've been so busy and have had to give
up some things for now due to prioritization. Programming is secondary to me
at the moment, so it's more important that my Microsoft certification
process gets underway.

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Paul Glavich
Sent: Friday, February 01, 2013 10:10 PM
To: 'ozDotNet'
Subject: RE: SPAM-LOW Re: WCF service best practises

 

At the risk of being argumentative, we asked for this. Maybe not you or me
specifically, but the community at large has. I agree the number of
technologies at play, particularly in this space is large but it makes it
all the more *interesting* to make those architectural choices. In some
ways, less choice is better as the number of possibilities and combinations
are less, thus decisions are more constrained and easier to get to.

 

However, the flexibility afforded to us now is great. The better
technologies will rise, the lesser ones either improved, integrated or
discarded and this is our task. In a properly architected system, the risk
of choice of a communications technology can be mitigated. However, we are
also human and can introduce dependencies where in hindsight, this was a bad
thing. We live and learn. It goes back to the circle of dev life
previously mentioned. Never believe the hype. Accept it for what it is,
experience it, come to an informed decision based on that, and your educated
judgement. Remember, .Net remoting is still there :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Stephen Price
Sent: Saturday, 2 February 2013 11:52 AM
To: ozDotNet
Subject: Re: SPAM-LOW Re: WCF service best practises

 

I must be getting old too Greg. Your rants are starting to make sense. I'm
even nodding my head as I read. 

 

I've said it before, they invent this stuff faster than anyone can learn it.
Lets hope its heading in the right direction. For the children's sake.

 

On Sat, Feb 2, 2013 at 7:15 AM, Greg Keogh g...@mira.net wrote:

Folks, I'm pleased to see that other people here are irritated by the number
of choices we have for communication and by the complexity of WCF. I was
also pleased to see someone else was bewildered by having WebAPI buried
inside MVC and found a way of starting with a manageable skeleton project.

 

Luckily I can delay my confusion over using WCF or whatever else is trendy
this week, as the core working code of my service is actually inside a
neutral DLL.

RE: New Web API project

2013-02-01 Thread Paul Glavich
I believe it is an artefact of wanting to enable SPAs (Single Page
Applications). That is, a web app using mostly a single page, comprised of
a lot of JavaScript calls to a WebApi backend.
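To illustrate (a hypothetical controller, not from any real project), the WebApi end of an SPA can be as small as this, with the page's JavaScript issuing an XHR and binding the returned JSON:

```csharp
using System.Collections.Generic;
using System.Web.Http;

// Routed by convention as GET /api/scores; WebApi content-negotiates
// the result (JSON for a typical XHR from the single page).
public class ScoresController : ApiController
{
    public IEnumerable<int> Get()
    {
        return new[] { 98, 85, 73 };
    }
}
```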

 

It will be rationalised soon enough I believe.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Greg Keogh
Sent: Friday, 1 February 2013 9:39 AM
To: ozDotNet
Subject: New Web API project

 

I thought I'd generate a fresh Web API project from the template to see what
it looks like, and I eventually find it under ASP MVC 4. I do so and get 143
files in 20 folders. There are scripts, images, cshtml files, style sheets
and lord knows what. What is all this sh*t just to make a http service?

 

I expected to get a concise little project with some skeleton files but I
get this gigantic schmozzle. Are you telling me that a Web API project is
wrapped-up inside the circus of the ASP MVC framework?

 

Am I expected to delete all the irrelevant files and strip it back to a
simple service without a UI, or is there a simpler way of creating a Web API
project from scratch?

 

Greg

 

P.S. Silverlight still works best with WCF. You could make http requests
from Silverlight, but it's all typeless. I'm not sure if there is some trick
to make them work together more pleasantly.



RE: SPAM-LOW Re: Web api

2013-02-01 Thread Paul Glavich
HttpClient, as already suggested, but the framework does suffer from a myriad
of choices (mostly for historical reasons). HttpWebRequest can do it too, as
can any number of generated proxies. Or you can go lower level, but I would
suggest getting familiar with HttpClient. That way of working is how things
will get done in the future (IMHO).
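A minimal example of the HttpClient way of working (hypothetical URL):

```csharp
using System;
using System.Net.Http;

public static class Program
{
    public static void Main()
    {
        // HttpClient is intended to be created once and reused.
        using (var client = new HttpClient())
        {
            client.BaseAddress = new Uri("http://example.org/");
            // Task-based API; blocking on .Result here only for a console demo.
            string json = client.GetStringAsync("api/products/1").Result;
            Console.WriteLine(json);
        }
    }
}
```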

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Heinrich Breedt
Sent: Friday, 1 February 2013 8:08 PM
To: ozDotNet
Subject: SPAM-LOW Re: Web api

 

.net client: httpclient
Webpage: ajax get/post 

On 1 Feb 2013 18:16, Stephen Price step...@perthprojects.com wrote:

Really? the only way to call Rest API's is with a third party add on?

 

I was kind of looking for the out of the box way. But will have a look at
RestSharp, always handy to know what's out there. They invent this stuff
faster than anyone can learn it all. :)

 

On Fri, Feb 1, 2013 at 2:53 PM, Mark Thompson matho...@internode.on.net wrote:

You might like to try something like RestSharp ( http://restsharp.org/ ) -
it has some very nice helpers for adding request parameters and additional
headers. I haven't used it extensively, but for the times I have used it, it
made the whole process pretty painless.

 

Regards,

Mark.

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Stephen Price
Sent: Friday, 1 February 2013 4:38 PM
To: ozDotNet
Subject: Web api

 

Hey all,

 

While we are on the subject of MVC, I was looking about for an example or
walkthrough of how you might call a Rest Web API from an MVC app. 

 

Not found much so far. I found a console C# app that uses the Asp.Net Web
API Client libraries to call one. I've also found some examples of how to
write the Web API's using MVC. 

 

So am scratching my head.. what httpX namespace is the right one to use?
HttpClient? something else? 

 

cheers,

Stephen

 



RE: SPAM-LOW Re: WCF service best practises

2013-02-01 Thread Paul Glavich
Webapi is a reaction to attitudes as described below.

 

People were foregoing WCF due to complexity and a variety of other reasons.
MVC was being used (with a bit of code) to produce simple JSON/XML REST
APIs. The team took this on board, altered their view of the world as they
were writing Web API, and thus we have WebApi. As with most things,
simplicity wins out and WebApi was the framework's attempt at that.

 

Of course WCF can do REST too, you just have to twiddle a few hundred
different knobs in the right way. I would argue WCF is not bullshit. WCF
comprises way more than REST and has a very good feature set (albeit with
some implementation flaws). Support for MSMQ, TCP, ServiceBus etc. all via
config is no easy feat.
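For the record, the REST knobs look something like this (a hedged sketch from memory — WebServiceHost wires up WebHttpBinding/WebHttpBehavior so you don't have to turn them all by hand; names are illustrative):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface IProducts
{
    [OperationContract]
    [WebGet(UriTemplate = "products/{id}", ResponseFormat = WebMessageFormat.Json)]
    string GetProduct(string id);
}

public class ProductService : IProducts
{
    public string GetProduct(string id) { return "product " + id; }
}

public static class Host
{
    public static void Main()
    {
        // WebServiceHost adds the web endpoint and behaviour automatically;
        // with plain ServiceHost you would configure each knob yourself.
        using (var host = new WebServiceHost(typeof(ProductService),
                   new Uri("http://localhost:8080/")))
        {
            host.Open();      // GET http://localhost:8080/products/1 now works
            Console.ReadLine();
        }
    }
}
```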

 

 you want to do an intermittently connected app then use some sort of
message queuing framework/system or roll your own 

And the circle of development life continues...

 

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of David Connors
Sent: Friday, 1 February 2013 9:10 AM
To: ozDotNet
Subject: SPAM-LOW Re: WCF service best practises

 

On Fri, Feb 1, 2013 at 7:23 AM, Craig van Nieuwkerk crai...@gmail.com wrote:

ASP.NET WebAPI seems to be the new hotness. I don't have
much experience with WCF, but everyone I talk to says it is too heavy and
complicated. WebAPI tries to simplify things.

 

WCF is a bunch of bullshit. People who use it just do so for the sake of
adopting some shiny new technology - it is just pointless middleware for the
sake of it. I don't understand why it exists anyway - as if we are going to
need to re-platform off TCP any time soon.

 

If I needed to do a lot of IPC stuff today I'd just use rest/json like
everyone else on the Internet. If you want to do something screaming fast,
use protobufs. If you want to do an intermittently connected app then use
some sort of message queuing framework/system or roll your own. I don't know
why a common API needs to sit on top of a bunch of unrelated use cases,
doing none of them very well.

 

$0.02. 

 

-- 

David Connors

da...@connors.com | M +61 417 189 363

Download my v-card: https://www.codify.com/cards/davidconnors

Follow me on Twitter: https://www.twitter.com/davidconnors

Connect with me on LinkedIn: http://au.linkedin.com/in/davidjohnconnors



RE: SPAM-LOW Re: WCF service best practises

2013-02-01 Thread Paul Glavich
I actually fully agree. I have been in the industry for a while now as well
and have seen the circle of dev life :) Kinda like clothes….. one day, those
blue spandex shorts will come back into fashion :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Davy Jones
Sent: Friday, 1 February 2013 9:49 PM
To: ozDotNet
Subject: Re: SPAM-LOW Re: WCF service best practises

 

I must be getting old.

 

XML-rpc  (simple XML http)

XML-soap complex ish

Wcf. Really complex

REST/JSON complex ish

Web API simple 

 

A full turn of the wheel in 12 years. 

 

I get a new intern on my team next week, I wonder what new ideas he will
bring. Maybe flat text config files? 

 

Davy the Older



Sent from my starfleet datapad.


On 1 févr. 2013, at 10:45, Paul Glavich subscripti...@theglavs.com wrote:

Webapi is a reaction to attitudes as described below.

 

People were foregoing WCF due to complexity and a variety of other reasons.
MVC was being used (with a bit of code) to produce simple JSON/XML REST
APIs. The team took this on board, altered their view of the world as they
were writing Web API, and thus we have WebApi. As with most things,
simplicity wins out and WebApi was the framework's attempt at that.

 

Of course WCF can do REST too, you just have to twiddle a few hundred
different knobs in the right way. I would argue WCF is not bullshit. WCF
comprises way more than REST and has a very good feature set (albeit with
some implementation flaws). Support for MSMQ, TCP, ServiceBus etc. all via
config is no easy feat.

 

 you want to do an intermittently connected app then use some sort of
message queuing framework/system or roll your own 

And the circle of development life continues…..

 

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of David Connors
Sent: Friday, 1 February 2013 9:10 AM
To: ozDotNet
Subject: SPAM-LOW Re: WCF service best practises

 

On Fri, Feb 1, 2013 at 7:23 AM, Craig van Nieuwkerk crai...@gmail.com wrote:

ASP.NET WebAPI seems to be the new hotness. I don't have
much experience with WCF, but everyone I talk to says it is too heavy and
complicated. WebAPI tries to simplify things.

 

WCF is a bunch of bullshit. People who use it just do so for the sake of
adopting some shiny new technology - it is just pointless middleware for the
sake of it. I don't understand why it exists anyway - as if we are going to
need to re-platform off TCP any time soon.

 

If I needed to do a lot of IPC stuff today I'd just use rest/json like
everyone else on the Internet. If you want to do something screaming fast,
use protobufs. If you want to do an intermittently connected app then use
some sort of message queuing framework/system or roll your own. I don't know
why a common API needs to sit on top of a bunch of unrelated use cases,
doing none of them very well.

 

$0.02. 

 

-- 

David Connors

da...@connors.com | M +61 417 189 363

Download my v-card: https://www.codify.com/cards/davidconnors

Follow me on Twitter: https://www.twitter.com/davidconnors

Connect with me on LinkedIn: http://au.linkedin.com/in/davidjohnconnors



RE: SPAM-LOW Re: WCF service best practises

2013-02-01 Thread Paul Glavich
You're kinda missing the point. Abstracting is not just so you can swap out
another tech in its place. In fact, that aspect of abstraction is somewhat
of a practical fallacy, but that is another discussion.

 

Working with WCF and MSMQ is quite easy.

Using the same principles, you can work with TCP without having to know
strictly about TCP socket inner workings (although granted it certainly
helps)

Working with HTTP and WCF is also relatively easy and, while WCF can do REST,
I would never use it for that. However, SOAP and the WS-* stack are pretty
full featured and not so easy to implement. WCF hides a huge amount of that
complexity, which is one of the major goals of an abstraction. Your specific
example, while valid in your scenario, does not negate its benefits.
I have used WCF in many ways in enterprises with great success. I believe
there is also a set of relay bindings for the Azure service bus too so you
get to use those same concepts again.
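To make the "same concepts again" point concrete, here is a hedged sketch (hypothetical contract): the address/binding/contract split is what lets the transport change without touching the service code.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IQuotes
{
    [OperationContract]
    string GetQuote(string symbol);
}

public class QuoteService : IQuotes
{
    public string GetQuote(string symbol) { return symbol + ": 42.0"; }
}

public static class Host
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(QuoteService));
        // Same contract, two transports. Swapping in another binding (or an
        // Azure relay binding) is one line here, or zero lines if the
        // endpoints live in app.config instead of code.
        host.AddServiceEndpoint(typeof(IQuotes), new NetTcpBinding(),
            "net.tcp://localhost:9001/quotes");
        host.AddServiceEndpoint(typeof(IQuotes), new BasicHttpBinding(),
            "http://localhost:8081/quotes");
        host.Open();
        Console.ReadLine();
    }
}
```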

 

Having said all that, if you don't like it, don't use it. Perhaps you chose
the wrong tech in the first place?

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of David Connors
Sent: Saturday, 2 February 2013 9:28 AM
To: ozDotNet
Subject: Re: SPAM-LOW Re: WCF service best practises

 

 

On Fri, Feb 1, 2013 at 7:44 PM, Paul Glavich subscripti...@theglavs.com wrote:

Webapi is a reaction to attitudes as described below.

 

[ ... ]

 

Ofcourse WCF can do REST too, you just have to twiddle a few hundred
different knobs on the right way. I would argue WCF is not bullshit. WCF
comprises way more than REST and has a very good feature set (albeit with a
some implementation flaws). Support for MSMQ, TCP, ServiceBus etc. all via
config is no easy feat.

 

 you want to do an intermittently connected app then use some sort of
message queuing framework/system or roll your own 

And the circle of development life continues...

 

Abstractions are great when they exist for a genuine need. I'd much rather
write a computer game against DirectX than whatever nightmare you normally
use to talk to GPUs in assembly. I'd rather write a business app in terms of
SQL than managing structures on disk myself. The benefit of the scenario
with the abstraction vs without is decisive. 

 

However when abstractions are thought up by some egg heads in response to a
need that doesn't exist, then you end up with something like WCF. The cure
is worse than the disease. 

 

In the case of the two examples I provided above, the leverage you get from
the abstraction is clear. In the case of WCF abstracting rest - why? Even if
you wrote your REST calls directly using sockets, it is STILL trivial. If I
needed an intermittently connected app to use MSMQ, I could use MSMQ. How
many times have you heard of anyone writing an app and then after it is done
going oh shit, we need to change the app to use SMTP instead of TCP
sockets. I mean WTF. Seriously. It is an abstraction that doesn't abstract
for added productivity or anything else. 

 

Case in point. I recall helping the developer hired by one of our clients
get a WCF service up and running on some of our infrastructure. We spent a
couple of days beating our heads against why it was failing before finally
determining that WCF assumes the existence of an HTTP host header when
generating some of its own internal URLs (which is not normally present in
on a fixed dedicated IP brinding). Failing that it reverted back to using
the machine's netbios machine name in some of its internal comms, which was
failing (again, in a high end hosting scenario the internal machine name is
never going to resolve on the Internet and normally sit because an app
accelerator or firewall etc) . Completely retarded behaviour and purely in
existence in and of and because of the abstraction itself. I could have
written all of the end-point communication for the project myself in less
time than it took to resolve that one issue.

 

In the debrief with the client:

changed to protect the guilty says:

 I see what you mean when talking about how WCF can be difficult to work
with. It's been a huge learning experience for me.

David Connors @ codify.com says:

 It is just too hard for what it does

changed to protect the guilty says:

 true

 

-- 

David Connors

da...@connors.com | M +61 417 189 363

Download my v-card: https://www.codify.com/cards/davidconnors

Follow me on Twitter: https://www.twitter.com/davidconnors

Connect with me on LinkedIn: http://au.linkedin.com/in/davidjohnconnors



RE: SPAM-LOW Re: WCF service best practises

2013-02-01 Thread Paul Glavich
At the risk of being argumentative, we asked for this. Maybe not you or me
specifically, but the community at large has. I agree the number of
technologies at play, particularly in this space is large but it makes it
all the more *interesting* to make those architectural choices. In some
ways, less choice is better as the number of possibilities and combinations
are less, thus decisions are more constrained and easier to get to.

 

However, the flexibility afforded to us now is great. The better
technologies will rise, the lesser ones either improved, integrated or
discarded and this is our task. In a properly architected system, the risk
of choice of a communications technology can be mitigated. However, we are
also human and can introduce dependencies where in hindsight, this was a bad
thing. We live and learn. It goes back to the circle of dev life
previously mentioned. Never believe the hype. Accept it for what it is,
experience it, come to an informed decision based on that, and your educated
judgement. Remember, .Net remoting is still there :)

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Stephen Price
Sent: Saturday, 2 February 2013 11:52 AM
To: ozDotNet
Subject: Re: SPAM-LOW Re: WCF service best practises

 

I must be getting old too Greg. Your rants are starting to make sense. I'm
even nodding my head as I read. 

 

I've said it before, they invent this stuff faster than anyone can learn it.
Lets hope its heading in the right direction. For the children's sake.

 

On Sat, Feb 2, 2013 at 7:15 AM, Greg Keogh g...@mira.net wrote:

Folks, I'm pleased to see that other people here are irritated by the number
of choices we have for communication and by the complexity of WCF. I was
also pleased to see someone else was bewildered by having WebAPI buried
inside MVC and found a way of starting with a manageable skeleton project.

 

Luckily I can delay my confusion over using WCF or whatever else is trendy
this week, as the core working code of my service is actually inside a
neutral DLL. I can write and test this code totally independently of how it
will be published, then later I can wrap it in thin code to publish it in
whatever ways I want. That will give me time to fiddle around with Web API.

 

Overall though, I'm getting utterly fed-up with the number of technologies,
kits, standards, languages, scripts, dependencies, conventions, platforms,
etc. Every month I get the MSDN magazine posted to me and I dread opening it
to see how many dozen new acronyms have been invented and discover how all
of my old apps are obsolete because there are new and better things to do
it with.

 

I must be getting old too, as I pine for the previous decades of programming
where there was less choice and everything just goddamn worked and was
documented. Now I spend whole days futzing around to try something out or
desperately searching the Internet for clues on incomprehensible errors.
There was a time when you could feel good as being a well-rounded programmer
with good general knowledge. These days it's practically impossible to be
well-rounded in every significant aspect of programming without
experimenting and studying 18 hours every day and skipping eating and
bathing. It's like trying to understand every working part of a Jumbo Jet.
Instead of converging and stabilising in modern times, software development
is disintegrating into a jumble of parts, of which many are nearly
duplicated, conflicting, poorly documented, unstable, overly-complex,
inter-dependent and multi-versioned. I'm finding that the joy of computer
programming is being sucked out of me week by week. The thing that sh*ts me
most is what came out of the discussion weeks ago about how there is no
single reliable way of writing multi-platform software. To do that you have
to be boffin of C#, C, C++, JavaScript, Java and all of their supporting
kits.

 

Oh well, back to Silverlight 4 coding this morning ... and that's nearly
obsolete already!!

 

Greg

 



RE: Website request slow performance / timeouts

2011-05-25 Thread Paul Glavich
A few things you can look at are:

 

· Is staging a 32 bit environment or is everything all 64bit? If
memory requirements are not large you can look at running IIS in 32bit mode
in the VM’s which can yield good CPU utilisation benefits. Generally, VM’s
suck IMHO compared to physical. I have personally used 32bit mode IIS within
a 64bit VM and seen very good CPU improvement.

· Do you use stored procedures and if so, do many of the stored
procedures have conditional logic in there that causes different SQL
statements to be executed based on differing input parameters (not talking
about constructing SQL strings and using sp_executesql though)? This kind of
thing can cause SQL to use an incorrect query plan for a proc and cause really
long execution times for that proc until it is recompiled/the query plan is
removed from cache.

· Look at the Asp.NET request execution time counter to see if
ASP.NET itself is taking a long time to process any requests. If not, then
you know it's outside of the ASP.NET app space. If yes, it could be doing
either a long bit of processing or waiting on something like a SQL query to
execute.

· Also look at Asp.NET Application Restart and worker process
restarts. App restarts may be causing app recompilation and thus making
requests take longer.

· I assume you have looked at the memory counter to ensure no memory
leaks are present (ever increasing memory usage). If you suspect this, you
may also want to look at the GC counts in the Asp.Net Memory counters to
ensure an excessive amount of garbage collection is not holding up
processing (although you would see this on staging as well).

 

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Michael Lyons
Sent: Tuesday, 24 May 2011 2:29 PM
To: 'ozDotNet'
Subject: Website request slow performance / timeouts

 

I’ve been working on an ASP.Net solution which has a slow performance issues
and it has got me baffled.

 

Problem:

The production server randomly slows down when serving asp.net requests and
even times out. 

 

System architecture:

The solution is hosted on a dedicated box which is running VmWares ESXi with
4 VM servers sitting on it (1 per core). Each VM is on its own network.

All network communication is done through a dedicated hardware firewall,
even between VM’s (unfortunately the auditor has to have it this way).

Database is on 1 VM while another has the web server.

ASP.Net is v4 running on IIS 7.5 while database is SQL Server 2008R2 all on
top of Windows Server 2008 R2

 

Analysis to date:

I’ve run a profiler over the solution and so far come up with nothing that
really needs to be optimised.

Our staging environment is running the same way as our production system
architecture minus the hardware firewall and has a lot lower hardware specs
but performs better than the production environment. When I’m talking
slower, I’m talking ¼ of the memory and a 7 year old CPU.

Production IIS logs show some randomly high request execution times.

 

Theories to date:

ESXi is doing something weird and causing VM’s to run slow.

Firewall is blocking requests randomly or is having performance issues,
although I don’t see it.

IIS is randomly running slow.

Sql Server is randomly running slow

 

 

My questions:

What Windows performance counters would you watch? Besides the typical
CPU, disk, memory and ASP.NET 4.0 counters?

Do the IIS logs' request execution times include the time to send the
network data? E.g. from time of socket open to time of socket closed? Or is
it just the pipeline without the TCP time included – e.g. serving a straight
HTML file would really just be the time to read the file from disk.

What else would you look at?

 

 

--

Michael Lyons



RE: SPAM-LOW Re: Project planning/issue tracking/etc, how are you doing it?

2011-02-15 Thread Paul Glavich
We use Rally http://www.rallydev.com/

We are currently on the community edition as we have a small team but will
be branching out into multiple projects. The community edition doesn't
support this, but I believe you can upgrade to the paid version to get it.

The company used to use PivotalTracker which I found pretty poor. I have
also looked at AgileZen which also seems quite rudimentary. Rally is a
nice in-between of TFS and PivotalTracker/AgileZen. It's pretty comprehensive
and has a decent UI.

TFS is good but is a behemoth of ugliness. Reporting is its strong point. The
MSF Agile templates are pretty average though.


- Glav

-Original Message-
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Scott Barnes
Sent: Tuesday, 15 February 2011 5:38 PM
To: ozDotNet
Subject: SPAM-LOW Re: Project planning/issue tracking/etc, how are you doing
it?

Like all developers - I'm writing one! hehe.. I've been tinkering with one
called iWANT, basically taking the scraps left over from Agile methodology,
cramming in SketchFlow and mixing it up with a whole spoonful of Dr Zoidberg
from Futurama, and well... iWANT is what I've ended up with so far.

The reality though, from traveling around Australia for the past few years
meeting various ppl on this list, their cubicles blah blah: I've yet to see
a team really adopt Agile end to end for one (it seems to be a scrap yard of
cherry picks), and Excel and TFS are pretty much the issue tracking
backbone of Australian .NET development :) (Not that I see anything really
wrong with that - each to their own I say).

Here's a screeny of it so far -
http://www.flickr.com/photos/mossyblog/5405961003/in/set-72157600318083654/lightbox/

I'll finish it one day but yeah, that's where I'm kind of at personally
- mostly it's just Excel spreadsheets that live in some govt employee's inbox
atm :( (current gig).

P.S
Why Zoidberg? Well, given most ppl get religious about this topic... I figure,
why not worship someone like him if you're gonna form a religion.. base it
around a man-like crustacean with an insatiable appetite and an adorable
"I'm helping!" personality :)

---
Regards,
Scott Barnes
http://www.riagenic.com



On Mon, Feb 14, 2011 at 1:29 PM, Noon Silk noonsli...@gmail.com wrote:
 As many of you will know, I used to use trac for this, but I'm moving 
 (well, for some things) to JIRA. They also have GreenHopper to do the 
 Agile component; namely, planning boards and task boards and so on.
 Now, it's arguably useful, but, there is, IMHO, a significant and 
 bizarre flaw in that you can't do this *across projects* (unless I am 
 mistaken).

 Atlassian claim they'll be providing this soon:
 http://jira.atlassian.com/browse/GHS-1800 but in case they don't
 actually deliver on that - and even if they do - I'm wondering how other
 people do this. Is there some plugin for some software you are using?
 Is there some better strategy for this?

 Interested in thoughts.

 --
 Noon Silk

 http://dnoondt.wordpress.com/  (Noon Silk) | 
 http://www.mirios.com.au:8081 

 Fancy a quantum lunch?
 http://www.mirios.com.au:8081/index.php?title=Quantum_Lunch

 Every morning when I wake up, I experience an exquisite joy — the joy 
 of being this signature.





RE: NuPack

2010-10-08 Thread Paul Glavich
I like the fact that .NET finally has a package manager and, like Mitch said,
one built into Visual Studio, which is a key point.

 

I also think it will work really nicely in corporate environments where you
can set up a package repository location so that all those tools, libraries and
whatnot created for internal use can be searched and imported
easily. In large or disparate orgs, it's easy for each department or team to
have a Library X while other departments replicate that functionality in
their own library.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of David Burela
Sent: Friday, 8 October 2010 4:09 PM
To: ozDotNet
Subject: Re: NuPack

 

I like the examples that Scott Hansleman has on his blog. Like adding
EF4-CTP4 and CompactSQL.

 

Me, I'll be very happy when things like the Silverlight Toolkit are able to
be added seamlessly like this, and other small utils that I use like
WP7Performance.dll.

 

But I could solve this by myself.

Another feature NuPack allows is for you to pull things in from the main
package repository from Microsoft, but you can also specify a foldershare on
your network to act as a package repository.

You can also create packages really easily. It is just a .zip folder with a
6 line XML metadata file.
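For reference, a metadata file of that shape might look like this (a hedged sketch; the element layout follows the early NuPack .nuspec manifest format, and the id/version/author values here are invented):

```xml
<?xml version="1.0"?>
<!-- Sketch of a minimal NuPack package manifest; all values are made up. -->
<package>
  <metadata>
    <id>UsefulLinqExtensions</id>
    <version>1.0.0</version>
    <authors>Internal Tools Team</authors>
    <description>Handy LINQ extension methods shared on the network repository.</description>
  </metadata>
</package>
```

Zip that next to the .dll and drop it on the share, and it becomes a package source.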

 

What I might end up doing is taking those small library .dlls that I use
regularly (WP7 controls, linq extensions, etc.)., create my own packages of
them, and put them onto a network share.

 

Taking it further, I could create a small PowerShell script called
StandardWP7Tools.ps1 containing:

Add-Package WP7Performance

Add-Package UsefulLinqExtensions

Add-Package ninject

etc.

 

That way when I create a new project I can enter that single powershell
command and have my project boot strapped with my standard libraries.

 

The fact that it is all powered by powershell could lead to some interesting
ways of playing with it / setting up projects.

-David Burela

On 8 October 2010 15:50, Winston Pang winstonp...@gmail.com wrote:

I like the idea, although it's not new, since Ruby had the same concept and
Perl/CGI did as well I think. Anyway, it's good to see that they're
integrating it with VS now. It certainly makes me a lazier person, and
I can spend more time trying new libraries than figuring out what the
hell to inject into my config files.

 

On Fri, Oct 8, 2010 at 3:47 PM, Mitch Denny mitch.de...@readify.net wrote:

I'm really happy to finally see a package manager for .NET that will end up
embedded in Visual Studio (this is the key for me). It means it has a chance
of getting off the ground.

Regards
Mitch Denny
Readify | Chief Technology Officer
Suite 408 Life.Lab Building | 198 Harbour Esplanade | Docklands | VIC 3008 |
Australia
M: +61 414 610 141 | E: mitch.de...@readify.net | W: www.readify.net

The content of this e-mail, including any attachments is a confidential
communication between Readify Pty Ltd and the intended addressee and is for
the sole use of that intended addressee. If you are not the intended
addressee, any use, interference with, disclosure or copying of this
material is unauthorized and prohibited. If you have received this e-mail in
error please contact the sender immediately and then delete the message and
any attachment(s).



-Original Message-
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of silky
Sent: Friday, 8 October 2010 3:41 PM
To: ozDotNet
Subject: Re: NuPack

On Fri, Oct 8, 2010 at 3:17 PM, David Burela david.bur...@gmail.com wrote:

[...]

 What does everyone think of it?

I guess I don't even really understand the problem. Why aren't all your
references in a \lib folder in the same directory as the rest of your
solution?

--
silky

http://dnoondt.wordpress.com/

Every morning when I wake up, I experience an exquisite joy - the joy of
being this signature.

 

 



RE: usercontrol caching

2010-09-08 Thread Paul Glavich
You can actually use standard ASP.Net tracing (via the trace.axd URL) to see
the difference in overall page processing times AND you will be able to see
references to the static content in the control tree as part of the trace
response.
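For anyone wanting to try it, that tracing is switched on in web.config roughly like this (a minimal sketch; the attribute values are illustrative, not from the thread):

```xml
<!-- Minimal sketch: enable ASP.NET tracing so /trace.axd becomes available. -->
<configuration>
  <system.web>
    <!-- localOnly="true" keeps the trace output off publicly served responses -->
    <trace enabled="true" requestLimit="40" pageOutput="false" localOnly="true" />
  </system.web>
</configuration>
```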

 

Tools like ANTS Profiler from Red Gate (http://bit.ly/asuuSh) will show you
the real value of the performance gain, and the Visual Studio perf tools will
also do this.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Anthony
Sent: Tuesday, 7 September 2010 10:57 PM
To: 'ozDotNet'
Subject: RE: usercontrol caching

 

Thanks Paul - that does answer my question. Can you suggest any tool to use
to see the performance difference when I implement caching? Would be
interesting to measure the difference in performance!

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Paul Glavich
Sent: Tuesday, 7 September 2010 10:31 PM
To: 'ozDotNet'
Subject: RE: usercontrol caching

 

Not sure I fully understand your question but, while the cache condition is
satisfied, none of your code around that user control's code-behind will be
executed (which is exactly why the cache is a good thing from a perf
perspective).

 

ASP.NET takes the HTML result of that user control and stores it *verbatim*
while the cache condition is satisfied. While it's being served from cache,
ASP.NET will take that output and simply push it back as the response
without executing any server-side code (as far as that control is
concerned).

 

The answer (if I understand what you are asking) is to reduce the cache time
or vary the cache instance by a param other than UserId.
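In directive form, that adjustment might look something like this (a sketch; PageId is a hypothetical stand-in for whatever coarser parameter suits the page):

```aspx
<%-- Sketch: shorter cache window, varied by a coarser (hypothetical) param --%>
<%@ OutputCache Duration="60" VaryByParam="PageId" %>
```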

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Anthony
Sent: Tuesday, 7 September 2010 5:17 PM
To: 'ozDotNet'
Subject: usercontrol caching

 

I have a user control  which is set to output cache using the following
command..

 

<%@ OutputCache Duration="1000" VaryByParam="UserId" %>

 

When the usercontrol is being cached the usercontrol is no longer available
so my code where ... usercontrol.userid=89   fails!

 

Do I just wrap this code in:

 

If usercontrol IsNot Nothing Then

    usercontrol.UserId = 89

End If

 

Or is there a 'proper' way to do this?

 

Is your website http://www.intellixperience.com/signup.aspx  being
IntelliXperienced?  | www.yougoingmyway.com ?
regards
Anthony (*12QWERNB*)


 

 



RE: usercontrol caching

2010-09-07 Thread Paul Glavich
Not sure I fully understand your question but, while the cache condition is
satisfied, none of your code around that user control's code-behind will be
executed (which is exactly why the cache is a good thing from a perf
perspective).

 

ASP.NET takes the HTML result of that user control and stores it *verbatim*
while the cache condition is satisfied. While it's being served from cache,
ASP.NET will take that output and simply push it back as the response
without executing any server-side code (as far as that control is
concerned).

 

The answer (if I understand what you are asking) is to reduce the cache time
or vary the cache instance by a param other than UserId.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Anthony
Sent: Tuesday, 7 September 2010 5:17 PM
To: 'ozDotNet'
Subject: usercontrol caching

 

I have a user control  which is set to output cache using the following
command..

 

<%@ OutputCache Duration="1000" VaryByParam="UserId" %>

 

When the usercontrol is being cached the usercontrol is no longer available
so my code where ... usercontrol.userid=89   fails!

 

Do I just wrap this code in:

 

If usercontrol IsNot Nothing Then

    usercontrol.UserId = 89

End If

 

Or is there a 'proper' way to do this?

 

Is your website http://www.intellixperience.com/signup.aspx  being
IntelliXperienced?  | www.yougoingmyway.com ?
regards
Anthony (*12QWERNB*)

Is your website being IntelliXperienced?

 

 



RE: SPAM-MED: Re: IIS Express

2010-06-30 Thread Paul Glavich
FWIW, I did install those extensions and they have worked very well.

 

On one installation however, they did crash my VS instance when trying to
connect to a TFS Server. There is a simple registry fix for this which I
have applied and everything works well again. I can't say I have seen any
degradation in VS performance as a result.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Geoff Appleby
Sent: Wednesday, 30 June 2010 12:26 PM
To: ozDotNet
Subject: SPAM-MED: Re: IIS Express

 

Bugger.

On Wed, Jun 30, 2010 at 12:01 PM, Michael Lyons maill...@ittworx.com
wrote:

Looks interesting, but I can only think of it being useful in a locked down
environment, otherwise you would just use either IIS or ASP.Net dev server.

 

In regards to the productivity power tools, I tried it last week and the
experience was terrible. VS just went from running nice and smooth to an
absolute hog, while opening files took anywhere from 30 seconds to 5
minutes.

There was also an update in that time, but it just made things worse. IMHO
I'd be hanging out for a few more release cycles before trying. 

 

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Geoff Appleby
Sent: Wednesday, 30 June 2010 9:31 AM
To: ozDotNet
Subject: SPAM-MED: Re: IIS Express

 

very cool. i like it :)

what i like even more was a slightly earlier post about the productivity
power tools.

http://weblogs.asp.net/scottgu/archive/2010/06/09/visual-studio-2010-productivity-power-tool-extensions.aspx

 



 

On Wed, Jun 30, 2010 at 8:40 AM, noonie neale.n...@gmail.com wrote:

ScottGu has announced the impending release of a beta version of IIS
Express:-  

 


http://weblogs.asp.net/scottgu/archive/2010/06/28/introducing-iis-express.aspx

 

This might make development a bit easier.

 

The comments provide interesting reading and indicate some more
tools/utilities in the wind.

 

-- 

Regards,

noonie

 




-- 
Geoff Appleby
New Blog! http://www.crankygoblin.com/geoff




-- 
Geoff Appleby
New Blog! http://www.crankygoblin.com/geoff



Re: Permission denied

2010-05-13 Thread Paul Glavich
That would be my first guess. Perhaps a service account's creds have
expired, or an element of infrastructure has changed that prevents a
previously successful auth.


 - Glav

Sent from my iPhone

On 13/05/2010, at 4:34 PM, Mark Kemper markkemp...@gmail.com wrote:


Could there be a active directory issue?


On 13/05/2010, at 11:24, Dylan Tusler dylan.tus...@sunshinecoast.qld.gov.au 
 wrote:


We've suddenly (this week) started getting "Permission denied"
errors on some of our internal websites (the page loads fine, but a
little "Error on page." indicator appears in the bottom left corner, and
behind it is a "Permission Denied" error. Some functionality
doesn't work; specifically it appears to be choking on a JavaScript
POST to another page on the same site.)


Also, a number of users are reporting that they are being prompted  
to log in to web pages that previously never prompted for  
credentials.


One site in particular is heavily affected, and looking at the  
site, I can't see anything having changed (no web.config changes,  
no permissions changes, etc.)


I've looked at everything I can think of, including running  
ethereal traces and logging vast amounts of procmon logs. Nothing  
untoward appears that I can see.


Can anyone take a stab at what might be going on? I'm just getting  
frustrated with it.


Cheers
Dylan Tusler



---------------------------------------------------------------------
To find out more about the Sunshine Coast Council, visit your local  
council office at Caloundra, Maroochydore, Nambour or Tewantin. Or,  
if you prefer, visit us on line at www.sunshinecoast.qld.gov.au


This email, together with any attachments, is intended for the  
named recipient(s) only. Any form of review, disclosure,  
modification, distribution and or publication of this email message  
is prohibited without the express permission of the author. Please  
notify the sender immediately if you have received this email by  
mistake and delete it from your system. Unless otherwise stated,  
this email represents only the views of the sender and not the  
views of the Sunshine Coast Regional Council.



RE: [OT] Visual Studio 2010 RC

2010-03-26 Thread Paul Glavich
>> ...but it's simply not as productive to be modifying just raw xaml IMHO.

I am in the same boat as you. I really like the XAML designer in 2010 and am
better in that than in Blend. Having said that, watching someone proficient
in Blend work some magic really shows how productive you can be in it. For
me though, I really have a tough time adapting to its UI and flow. I use it
for animations mostly and some styling, and resort to 2010 for the rest.
That's probably the dev overpowering what little creative aspect I have.

- Glav

-Original Message-
From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of ton...@tpg.com.au
Sent: Friday, 26 March 2010 10:59 AM
To: ozdotnet@ozdotnet.com
Subject: [OT] Visual Studio 2010 RC

Hi all,

I've been reading some of the comments on Scott Guthrie's blog, and there
seem to be quite a few people asking for an RC2 of Visual Studio 2010.  I
think I agree with this, because I'm not convinced RC1 was anything more
than a Beta anyway. I mean, how could it really be a Release Candidate if it
was delivered with major pieces missing? There's nothing worse than having
to install something and then apply a whole list of patches to get it to
behave the way you want it. 
What a waste of time that is.

Also, I noted a number of people commenting on whether they were going to
take up VS2010. We will be going with VS2010 when it is released, provided
that the feature set is not less than what we currently have, the speed is
at least as good, the memory footprint doesn't grind our systems to a halt
(open up 2 or more instances of VS2008 and the system eventually crashes,
but that's something we find we do frequently) and the upgrade process is
relatively straightforward.

We attempted to upgrade our project to VS2010RC, but it had a number of
issues, including problems with nested controls, etc. I know there is a
patch out for that, but still, that would require everyone on my team to run
those patches, so we won't be attempting to try it out just yet. So we are
waiting for a more stable release. When that occurs, we'll probably switch
over at a point in time that is convenient to us. Project phases are pretty
short these days, so if Microsoft provides us with a relatively painless
upgrade process, we'll probably go ahead and do it between phases.

The major new feature that we care about is the built in xaml designer. It's
always handy to get a rough visual feel for what we are constructing. Sure,
it's not as good as Blend, but considering how resource intensive it is to
run both Blend and VS2008, I think it will be quite handy for my team. As it
is, most of us won't open Blend and will do most of our work constructing and
tweaking raw xaml. I know there are people who feel superior because they
can do that, but it's simply not as productive to be modifying just raw xaml
IMHO.

Regards,
Tony




RE: ASP.NET MVP

2010-03-25 Thread Paul Glavich
I tend to take a varied approach.

 

If the app is of low-medium complexity, I use a direct approach by
constructing the presenter in the view itself manually, passing the view
instance to the presenter as part of the constructor. It's direct and low in
complexity, and construction overhead is trivial. I pass events on to the
presenter by simply calling methods on that presenter.
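A minimal sketch of that direct wiring (all names here are hypothetical, not from any real project):

```csharp
using System;

// The view contract the presenter talks to.
public interface ICustomerView
{
    string CustomerName { get; set; }
}

// The presenter receives the view via its constructor; the view forwards
// UI events as plain method calls (the "direct" approach described above).
public class CustomerPresenter
{
    private readonly ICustomerView _view;

    public CustomerPresenter(ICustomerView view)
    {
        _view = view;
    }

    public void LoadCustomer()
    {
        // In a real app this would come from a service or repository.
        _view.CustomerName = "Acme Pty Ltd";
    }
}

// A fake view like this is all a unit test needs; in WebForms the
// Page/UserControl code-behind would implement ICustomerView instead.
public class FakeCustomerView : ICustomerView
{
    public string CustomerName { get; set; }
}
```

Testability follows directly: construct the presenter with a fake view, call its methods, and assert on the view's properties.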

 

If requirements are more complex, such as many dependencies within the
presenter, perhaps the need to communicate between presenters and a higher
degree of loose coupling, then events on the view with the presenter
subscribing to view events is a good way to go. More indirect and a little
more work but very workable.

 

Indirection and loose coupling are good, but I try to measure the degree
required and the benefits, rather than applying a cover-all approach to every
single solution, as it may introduce extra complexity or dev effort just for
the sake of it. Much like webformsmvp, when using events there is a temptation
to write framework code to auto-subscribe, using techniques such as
reflection. There are computational/perf issues you'd need to consider when
going with this approach.

 

Testability wise, both are equal.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Winston Pang
Sent: Thursday, 25 March 2010 10:39 AM
To: ozdotnet@ozdotnet.com
Subject: ASP.NET MVP

 

Alright guys,

So at work, we're looking to use MVP in our project. Can't use any
frameworks, so webformsmvp is out of the question.

The pattern seems trivial enough, but there seems to be a variety of
implementations. One thing I can't pick from is:

1) Add events on the View interface and have the view raise these events
that the Presenter has subscribed to, ticks for looser coupling, but I'm not
sure what the ramifications are for testing, would it mean when I mock my
view, I'd have to raise these events on the view.
2) Have the underlying operations exposed on the Presenter, and have the
View invoke the operations, but more coupling, but it means when I'm testing
I could just invoke the operations on the presenter and test the results on
the View.


What's everyones approach on this?

Cheers,


Winston 



RE: ASP.NET Web Forms vs MVC vs ...

2010-03-25 Thread Paul Glavich
AFAIK, from the ASP.NET teams perspective, controls are for webforms. Helper
methods are for MVC. A few control vendors were wondering how to further
their control offerings with respect to MVC and all (I know of anyway) are
offering control helper methods. This appears to be the MVC way.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Paul Stovell
Sent: Wednesday, 24 March 2010 4:03 PM
To: ozDotNet
Subject: Re: ASP.NET Web Forms vs MVC vs ...

 

I'd think a primary driver would be for the advantages of imperative code -
it's very easy to do this:

 

<% foreach (var customer in Model.Customers) { %>

   <li><%= customer.Name %></li>

<% } %>

 

And requires no changes to the ASPX compilation engine. Using tags, the
current ASPX renderer would try to treat them as controls, which would mean
using repeaters to iterate, which means binding, which potentially means
events and a more complete page lifecycle - not just simple page rendering
anymore. 

 

That said, custom View Engines for ASP.NET MVC like Spark or NVelocity can
give you a tag-like rendering syntax:

 

   <li each="var customer in Model.Customers">${customer.Name}</li>

 

http://sparkviewengine.com/

 

Paul

 

On Wed, Mar 24, 2010 at 2:37 PM, ton...@tpg.com.au wrote:

Actually, I have a question that might extend this thread just a little bit
more - why do we need to
have yellow code in our mvc forms when Microsoft could have written user
controls that generate
the required code? Was it just so they could introduce Linq into the forms?
Or did they just not get
around to creating the tags?

I forget what the syntax of asp.net mvc is, but I think something along the
lines of

<mvc:Form ID="mvcForm" runat="server">
  Customer Name: <mvc:TextBox ID="mvcCustomerNameTextBox" runat="server" />
</mvc:Form>

instead of the current yellow code equivalent?





On Wed, Mar 24th, 2010 at 1:03 PM, Liam McLennan liam.mclen...@gmail.com
wrote:

 The original question implied that Umbraco are rewriting for the
 purpose of
 moving to MVC. I think it is much more likely that they were planning
 to
 rewrite for other reasons and decided to take the opportunity to
 switch to
 the superior platform. Rewriting a large app just to switch from
 webforms to
 mvc would be stupid in most cases.

 On Mon, Mar 22, 2010 at 5:42 PM, silky michaelsli...@gmail.com
 wrote:

  On Mon, Mar 22, 2010 at 6:00 PM, Paul Glavich
  subscripti...@theglavs.com wrote:
   I am not arguing for or against webforms but the previous
 argument around
   battling with the likes of grid events doesn't really do the
 argument
   justice. I mean, if u don't like the grid events, use a repeaters
 and
  push
   whatever u want down the wire. You can still iterate over
 collections in
   webforms just like MVC and output whatever goo you like.
  
   It's interesting tho as the event model that you say you battle
 so much
  with
  
   IS a compelling piece for many other devs. As always, horses for
 courses.
   I haven't seen anybody mention model binders yet which I find a
  compelling
   yet conceptually simple piece of MVC.
 
  It's[1] only marginally different from using an object data source,
 surely?
 
  This is what seems so useless about this entire thread.
 
  I think everyone has something different in mind when they compare
 one
  thing and another.
 
  *shrug*, to quote Woody Allen (or Larry David, in character)
 whatever
  works. Doesn't make sense to be blindingly in love with one
  particular method or anything (exceptions are obvious [and
  hilarious]).
 
 
   - Glav
 
  --
  silky
 
   http://www.programmingbranch.com/
 
  [1] Reference:
 http://msdn.microsoft.com/en-us/library/dd410405.aspx
 



 --
 Liam McLennan.

 l...@eclipsewebsolutions.com.au
 http://www.eclipsewebsolutions.com.au








-- 
Paul Stovell



RE: ASP.NET MVP

2010-03-25 Thread Paul Glavich
Hey Winston,

 

Not sure which client you're working with currently, but feel free to email me
directly on glav @ aspalliance.com if you'd like to discuss it further.
Happy to help where possible.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Winston Pang
Sent: Thursday, 25 March 2010 11:17 PM
To: ozDotNet
Subject: Re: ASP.NET MVP

 

Interesting, well at least, I'm on the right track, and I'm not exactly
going insane :).

Btw, Paul, I read a document at work, from the current client I'm working
on, and it so happens you reviewed it too, I assume you wrote about the MVP
section on it?

Back on topic, I'm still struggling a bit to see how using an IoC container
fits into the context of testing. I mean, I mock my views, and my models
in some cases, but I require the real presenter anyway, since that's what
I'm verifying against, so where does IoC fit in with testing and in
conjunction with MVP per se?

Cheers,

Winston

On Thu, Mar 25, 2010 at 11:06 PM, Paul Glavich subscripti...@theglavs.com
wrote:

I tend to take a varied approach.

 

If the app is of low-medium complexity, I use a direct approach by
constructing the presenter in the view itself manually, passing the view
instance to the presenter as part of the constructor. It's direct and low in
complexity, and construction overhead is trivial. I pass events on to the
presenter by simply calling methods on that presenter.

 

If requirements are more complex, such as many dependencies within the
presenter, perhaps the need to communicate between presenters and a higher
degree of loose coupling, then events on the view with the presenter
subscribing to view events is a good way to go. More indirect and a little
more work but very workable.

 

Indirection and loose coupling are good, but I try to measure the degree
required and the benefits, rather than applying a cover-all approach to every
single solution, as it may introduce extra complexity or dev effort just for
the sake of it. Much like webformsmvp, when using events there is a temptation
to write framework code to auto-subscribe, using techniques such as
reflection. There are computational/perf issues you'd need to consider when
going with this approach.

 

Testability wise, both are equal.

 

-  Glav

 

From: ozdotnet-boun...@ozdotnet.com [mailto:ozdotnet-boun...@ozdotnet.com]
On Behalf Of Winston Pang
Sent: Thursday, 25 March 2010 10:39 AM
To: ozdotnet@ozdotnet.com
Subject: ASP.NET MVP

 

Alright guys,



So at work, we're looking to use MVP in our project. Can't use any
frameworks, so webformsmvp is out of the question.

The pattern seems trivial enough, but there seems to be a variety of
implementations. One thing I can't pick from is:

1) Add events on the View interface and have the view raise these events
that the Presenter has subscribed to, ticks for looser coupling, but I'm not
sure what the ramifications are for testing, would it mean when I mock my
view, I'd have to raise these events on the view.
2) Have the underlying operations exposed on the Presenter, and have the
View invoke the operations, but more coupling, but it means when I'm testing
I could just invoke the operations on the presenter and test the results on
the View.


What's everyones approach on this?

Cheers,


Winston 

 



Re: ASP.NET Web Forms vs MVC vs ...

2010-03-22 Thread Paul Glavich
I am not arguing for or against webforms, but the previous argument
around battling with the likes of grid events doesn't really do the
argument justice. I mean, if u don't like the grid events, use a
repeater and push whatever u want down the wire. You can still
iterate over collections in webforms just like MVC and output whatever
goo you like.
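To make the point concrete, iterating a collection with a repeater is only a few lines of markup (a hand-written sketch, not from the thread; the IDs and field names are invented):

```aspx
<%-- Sketch: push arbitrary output down the wire with a Repeater --%>
<asp:Repeater ID="CustomerRepeater" runat="server">
  <ItemTemplate>
    <li><%# Eval("Name") %></li>
  </ItemTemplate>
</asp:Repeater>
```

with `CustomerRepeater.DataSource = customers; CustomerRepeater.DataBind();` in the code-behind.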


It's interesting tho as the event model that you say you battle so  
much with IS a compelling piece for many other devs. As always, horses  
for courses.


I haven't seen anybody mention model binders yet which I find a  
compelling yet conceptually simple piece of MVC.


- Glav



Sent from my iPhone

On 22/03/2010, at 11:29 AM, David Connors da...@codify.com wrote:


On 22 March 2010 09:16, Craig van Nieuwkerk crai...@gmail.com wrote:
One of the things I find that makes MVC so productive is it doesn't
have lots of control toolkits. Wrestling with grid databind events to
make a row format correctly or chasing event handlers all over the
place trying to work out if you need to use the OnShow, OnMakeVisible,
OnEnable, OnPaint, OnDisplay event is a nightmare I am happy to see
the back of.

+1 and I'd argue that unless you're trying to do a wholesale
web-as-a-platform move ala GWT, event driven programming doesn't really
make a lot of sense given how the underlying technology works.


I always find it telling that the question of reach for the app (for  
Internet facing stuff that makes coin) rarely comes up in these  
technology threads.


--
David Connors (da...@codify.com)
Software Engineer
Codify Pty Ltd - www.codify.com
Phone: +61 (7) 3210 6268 | Facsimile: +61 (7) 3210 6269 | Mobile: +61 417 189 
363
V-Card: https://www.codify.com/cards/davidconnors
Address Info: https://www.codify.com/contact



Re: SPAM-LOW: Re: ASP.NET Web Forms vs MVC vs ...

2010-03-22 Thread Paul Glavich
You'd think you would have stopped participating in this thread by now  
then.


At any rate, I find the opinions and perceptions interesting.

- Glav

Sent from my iPhone

On 22/03/2010, at 6:42 PM, silky michaelsli...@gmail.com wrote:


On Mon, Mar 22, 2010 at 6:00 PM, Paul Glavich
subscripti...@theglavs.com wrote:
I am not arguing for or against webforms but the previous argument  
around

battling with the likes of grid events doesn't really do the argument
justice. I mean, if u don't like the grid events, use a repeaters  
and push
whatever u want down the wire. You can still iterate over  
collections in

webforms just like MVC and output whatever goo you like.

It's interesting tho as the event model that you say you battle so  
much with


IS a compelling piece for many other devs. As always, horses for  
courses.
I haven't seen anybody mention model binders yet which I find a  
compelling

yet conceptually simple piece of MVC.


It's[1] only marginally different from using an object data source,  
surely?


This is what seems so useless about this entire thread.

I think everyone has something different in mind when they compare one
thing and another.

*shrug*, to quote Woody Allen (or Larry David, in character) whatever
works. Doesn't make sense to be blindingly in love with one
particular method or anything (exceptions are obvious [and
hilarious]).



- Glav


--
silky

 http://www.programmingbranch.com/

[1] Reference: http://msdn.microsoft.com/en-us/library/dd410405.aspx