Thursday, March 11, 2010

Lambda Expressions in C#.NET

Lambda Expressions

Lambda Expressions make things even easier by allowing you to avoid the anonymous method and that annoying statement block:

using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        List<string> names = new List<string>();
        names.Add("Dave");
        names.Add("John");
        names.Add("Abe");
        names.Add("Barney");
        names.Add("Chuck");
        string abe = names.Find((string name) => name.Equals("Abe"));
        Console.WriteLine(abe);
    }
}
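For contrast, here is what the same lookup looks like with a C# 2.0 anonymous method and its statement block (a sketch of the older style, not code from the original post):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        List<string> names = new List<string> { "Dave", "John", "Abe", "Barney", "Chuck" };
        // The pre-lambda alternative: the delegate keyword plus a full statement block
        string abe = names.Find(delegate(string name)
        {
            return name.Equals("Abe");
        });
        Console.WriteLine(abe); // prints "Abe"
    }
}
```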

Because the compiler is smart enough to infer the parameter type, I don't even have to explicitly mention that name is a string above. I can remove it for even more simplicity and write it as such:

class Program
{
    static void Main(string[] args)
    {
        List<string> names = new List<string>();
        names.Add("Dave");
        names.Add("John");
        names.Add("Abe");
        names.Add("Barney");
        names.Add("Chuck");
        string abe = names.Find(name => name.Equals("Abe"));
        Console.WriteLine(abe);
    }
}

Now there is no particular reason why I have to use name as my variable. Developers will often use one-character variable names in Lambda Expressions just to keep things short. Here we replace name with p and everything works the same.

class Program
{
    static void Main(string[] args)
    {
        List<string> names = new List<string>();
        names.Add("Dave");
        names.Add("John");
        names.Add("Abe");
        names.Add("Barney");
        names.Add("Chuck");
        string abe = names.Find(p => p.Equals("Abe"));
        Console.WriteLine(abe);
    }
}

Lambda Expressions Using a Customer Class
I used a list of strings above, but you can just as easily use a list of objects to do the same thing. I will use an abbreviated form of a Customer class:

public class Customer
{
    public int Id;
    public string Name;
    public string City;

    public Customer(int id, string name, string city)
    {
        Id = id;
        Name = name;
        City = city;
    }
}

and now use Lambda Expressions to find a particular Customer with a name of "Abe".

class Program
{
    static void Main(string[] args)
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer(1, "Dave", "Sarasota"));
        customers.Add(new Customer(2, "John", "Tampa"));
        customers.Add(new Customer(3, "Abe", "Miami"));
        Customer abe = customers.Find(c => c.Name.Equals("Abe"));
    }
}

Again, we could make things more obvious by explicitly saying that c is of type Customer, but I wouldn't bet on many people doing it :) You could write the above as follows:

class Program
{
    static void Main(string[] args)
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer(1, "Dave", "Sarasota"));
        customers.Add(new Customer(2, "John", "Tampa"));
        customers.Add(new Customer(3, "Abe", "Miami"));
        Customer abe = customers.Find((Customer c) => c.Name.Equals("Abe"));
    }
}

Combining Extension Methods and Lambda Expressions for Help In Finding Our Customer
Let's say we have a custom CustomerCollection class, not within our control, that doesn't provide us an easy way to find a Customer:


public class CustomerCollection : IEnumerable<Customer>
{
    IEnumerable<Customer> _customers;

    public CustomerCollection(IEnumerable<Customer> customers)
    {
        _customers = customers;
    }

    public IEnumerator<Customer> GetEnumerator()
    {
        foreach (Customer customer in _customers)
            yield return customer;
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return _customers.GetEnumerator();
    }
}

Let's add an Extension Method to the CustomerCollection Class, called GetCustomer:

public static class CustomerExtensions
{
    public static Customer GetCustomer(this CustomerCollection customers,
        Predicate<Customer> isMatch)
    {
        foreach (Customer customer in customers)
            if (isMatch(customer))
                return customer;
        return null;
    }
}

And now, we can use the CustomerCollection class to easily find Abe:


class Program
{
    static void Main(string[] args)
    {
        CustomerCollection collection = GetCustomers();
        // Using my GetCustomer Extension Method...
        Customer abe = collection.GetCustomer(c => c.Name.Equals("Abe"));
    }

    // Pretend We Don't See This :)
    static CustomerCollection GetCustomers()
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer(1, "Dave", "Sarasota"));
        customers.Add(new Customer(2, "John", "Tampa"));
        customers.Add(new Customer(3, "Abe", "Miami"));

        return new CustomerCollection(customers);
    }
}

This Is LINQ!
But, of course, since CustomerCollection implements IEnumerable<T> (in this case, IEnumerable<Customer>), screw the Extension Methods and just use LINQ:

using System.Linq;

class Program
{
    static void Main(string[] args)
    {
        CustomerCollection collection = GetCustomers();
        // LINQ
        var abe = collection.Single(c => c.Name.Equals("Abe"));
    }

    // Pretend We Don't See This :)
    static CustomerCollection GetCustomers()
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer(1, "Dave", "Sarasota"));
        customers.Add(new Customer(2, "John", "Tampa"));
        customers.Add(new Customer(3, "Abe", "Miami"));

        return new CustomerCollection(customers);
    }
}
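One caveat worth sketching: Single throws an InvalidOperationException when the match count isn't exactly one, so FirstOrDefault is the forgiving alternative, and the same lookup also works in LINQ query syntax. A small standalone sketch (the Customer class from above is repeated here so the snippet compiles on its own):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id;
    public string Name;
    public string City;
    public Customer(int id, string name, string city) { Id = id; Name = name; City = city; }
}

class Program
{
    static void Main(string[] args)
    {
        List<Customer> customers = new List<Customer>();
        customers.Add(new Customer(1, "Dave", "Sarasota"));
        customers.Add(new Customer(2, "John", "Tampa"));
        customers.Add(new Customer(3, "Abe", "Miami"));

        // Query syntax compiles down to the same extension method calls
        Customer abe = (from c in customers
                        where c.Name.Equals("Abe")
                        select c).Single();
        Console.WriteLine(abe.City); // prints "Miami"

        // Single throws when there is no match (or more than one);
        // FirstOrDefault returns null instead when nobody matches
        Customer nobody = customers.FirstOrDefault(c => c.Name.Equals("Zed"));
        Console.WriteLine(nobody == null); // prints "True"
    }
}
```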





We collect videos & images from online sites like youtube and some other websites which provide them. And most of the News is also gathered from other online websites. Any material downloaded or otherwise obtained through the use of the service is done at your own discretion and risk and that you will be solely responsible for any damage to your computer system or loss of data that results from the download of any such material.
If you feel that the content on the site is illegal or Privacy please contact us at srinuchalla@gmail.com and such videos, images or any Content will be removed with immediate effect.

Tuesday, March 9, 2010

Introduction to Microsoft C# 2008

Microsoft’s C# programming language first appeared in the early 2000s. Microsoft’s design goals were to create a clean, concise programming language that was redesigned from the ground up based on the new programming paradigms that the Internet created. Over the past nine years, C# has become one of the most popular programming languages in the world. In this course, Mark Long introduces you to the basics of the C# language and gives you a foundation to continue to develop your skillset with C#.

Course: Introduction to Microsoft C# 2008
Author: Mark Long
SKU: 34046
Release Date: 2009-10-09
Duration: 7 hrs / 76 tutorials
______________________________________________________

Content:

Welcome:
Introduction (02:55)
Course Overview (03:41)
History of C# (05:48)
Programming Challenges (06:38)
Visual Studio 2008 Versions (07:15)

Visual Studio 2008:
Visual Studio & Virtual PC (06:12)
Getting Visual Studio 2008 (03:11)
Installing Visual Studio 2008 (06:57)
Visual Studio Tour pt. 1 (06:31)
Visual Studio Tour pt. 2 (07:05)
Understanding Solutions & Projects pt. 1 (02:49)
Understanding Solutions & Projects pt. 2 (03:44)
Your First C# Program (06:52)
Commenting Your Code (05:53)
Collapse & Expand Code (05:01)
Code Snippets (05:02)
Refactoring Code (04:39)

C# Essentials:
Data Types (05:58)
Built-in Value Types (06:21)
C# Variables (03:07)
Casting pt. 1 (06:24)
Casting pt. 2 (05:45)
Strings (03:46)
Converting (04:01)
Formatting (04:06)
Understanding Scope (05:06)
Constants (04:39)
Expressions & Operators (05:41)

C# Control Structures:
Relational & Logical Operators (04:55)
If Statements (06:27)
Switch Statements (05:55)
While Loop (05:58)
DoWhile Loop (05:16)
For Loop (06:13)

Methods & Events:
Coding & Calling Methods (03:56)
Method Parameters (05:02)
Event Handlers (05:42)
Handling Multiple Events (06:23)

Exceptions:
Exception Basics (07:03)
Catching an Exception (06:02)
System Exception Classes (04:58)
Throwing an Exception (06:30)
Validating Data (06:44)

Arrays & Collections:
Array Basics (06:39)
Using an Array (06:26)
Array Data (06:50)
Inserting Data into the Array (06:07)
Two Dimensional Arrays (04:27)
Jagged Arrays (04:11)
Collection Basics (06:01)
Typed & UnTyped Collections (06:30)
Using a List pt. 1 (06:56)
Using a List pt. 2 (04:20)
Sorted Lists pt. 1 (06:29)
Sorted Lists pt. 2 (03:47)
Queues (06:05)
Stacks (05:55)

Dates & Time:
DateTime Basics (05:58)
Formatting DateTime (04:26)
DateTime Math (04:14)

Debugging:
Debugging Basics (06:43)
Edit & Continue (04:43)
Using Breakpoints (04:02)
Debugging Windows (05:05)

C# Classes:
OOP Basics (06:57)
Creating a Class (06:05)
Instantiating an Object (07:06)
Class Tools (05:18)
C# Structures (03:35)
Constructors (06:45)
Class Indexers (04:06)

Delegates:
Delegates pt. 1 (06:40)
Delegates pt. 2 (06:35)

Download link folder:
http://hotfile.com/list/297643/5f7b19a


Code:1*346.4mb





Nice book.... Enjoy reading....!








Sunday, March 7, 2010

Windows Communication Foundation Basics

Getting Started:

The topics contained in this section are intended to give you quick exposure to the Windows Communication Foundation (WCF) programming experience. They are designed to be completed in the order of the list at the bottom of this topic. Working through this tutorial gives you an introductory understanding of the steps required to create WCF service and client applications. A service is a construct that exposes one or more endpoints, each of which exposes one or more service operations. The endpoint of a service specifies an address where the service can be found, a binding that specifies how a client must communicate with the service, and a contract that defines the functionality provided by the service to its clients.

After you work through the sequence of topics in this tutorial, you will have a running service, and a client that can invoke the operations of the service. The first three topics describe how to define a service with a contract, how to implement the service, and how to configure, host, and run the service. The service that is created is self-hosted, and the client and service run on the same machine. The service is configured using code rather than a configuration file. Services can also be hosted under Internet Information Services (IIS). For more information about how to do this, see How to: Host a WCF Service in IIS. Services can also be configured within a configuration file. For more information about using a configuration file, see Configuring Services Using Configuration Files.

The next three topics describe how to create a client proxy, configure the client application, and create and use a client that can access the functionality of the service. Services publish metadata that defines the constructs a client application needs to communicate with the service operations. WCF provides a ServiceModel Metadata Utility Tool (Svcutil.exe) to automate the process of accessing this published metadata and using it to construct and configure the client application for the service.

All of the topics in this section assume you are using Visual Studio 2008 as the development environment. If you are using another development environment, ignore the Visual Studio specific instructions.


How to: Define a Windows Communication Foundation Service Contract

This is the first of six tasks required to create a basic Windows Communication Foundation (WCF) service and a client that can call the service. For an overview of all six of the tasks, see the Getting Started Tutorial topic.

When creating a basic WCF service, the first task is to define a contract. The contract specifies what operations the service supports. An operation can be thought of as a Web service method. Contracts are created by defining a C++, C#, or Visual Basic (VB) interface. Each method in the interface corresponds to a specific service operation. Each interface must have the ServiceContractAttribute applied to it and each operation must have the OperationContractAttribute applied to it. If a method within an interface that has the ServiceContractAttribute does not have the OperationContractAttribute, that method is not exposed.
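As a sketch of what that looks like in practice, here is a minimal contract for a hypothetical calculator service (ICalculator and its operation names are illustrative, not taken from this article):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface ICalculator
{
    // Has OperationContractAttribute, so it is exposed as a service operation
    [OperationContract]
    double Add(double a, double b);

    // No OperationContractAttribute: this method is NOT exposed to clients
    double NotExposed(double a);
}
```

Each interface method marked with [OperationContract] becomes one service operation; the implementing class then supplies the actual logic.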



http://msdn.microsoft.com/en-us/library/system.servicemodel.servicecontractattribute.aspx

http://msdn.microsoft.com/en-us/library/ms731835.aspx



Friday, March 5, 2010

How COM+ manages transactions & state

Overview:


I will attempt to give a quick overview of how COM+ goes about its business. Quite a few articles and books are available on the topic by acclaimed authors. I can recommend the book “Programming Distributed Applications with COM+ and Microsoft Visual Basic 6” by Ted Pattison, or an article he wrote for Microsoft Systems Journal in October 1999, titled “Writing COM+-style Transactions with Visual Basic and SQL Server”. The following overview provides a very brief summary of the key functionality of COM+. Understanding the impact of each of these functions plays a crucial role in understanding how to use them in your reusable objects.
COM+ provides a programming model based on declarative transactions. Setting the “MTSTransactionMode” property of the class sets an object’s transaction mode at design time. When the COM+ runtime creates a new object, it examines the component’s “MTSTransactionMode” setting to determine whether the new object should be created inside a transaction. Once an object has been created, you cannot add it to a transaction, nor can you disassociate it from the transaction in which it was created. Once an object is created inside a transaction, it spends its entire lifetime inside this transaction. When the transaction is committed or aborted, the COM+ runtime destroys all the objects inside it.
When the COM+ runtime receives a request to create a new object, it inspects the transaction context of the creator to determine whether the creator is running inside a transaction. It also inspects the transaction support attribute of the new object. Using these two pieces of information it decides whether to create the object 1) inside a new transaction, 2) inside the same transaction as its creator or 3) without a transaction.
Figure 1 shows a diagram of a client application connecting to an object (the root object) through a COM+ context. The root object in turn instantiates a secondary object in the same COM+ transaction. The root object connects to the secondary object through another COM+ context. Figure 1 also shows three important Boolean values that COM+ uses to manage the scope of the transaction and the object lifetime. In Figure 1 these flags are set to their default values. For a detailed discussion on how COM+ uses these three flags, refer to chapter 10 of the book by Ted Pattison mentioned above.


Figure 1: Root object and secondary object in a COM+ transaction
The root COM+ object plays a very important role in a COM+ transaction. Every transaction as a whole maintains a Boolean value (MustAbort) that indicates whether the transaction should be aborted. This value is initially set to False when the transaction is created. COM+ inspects the value of MustAbort when:
The root object returns control to its caller. If MustAbort is False nothing happens. If it is True, COM+ aborts the transaction and deactivates all the secondary objects inside the transaction.
The root object is deactivated.
Depending on the value of MustAbort, COM+ will then either commit or abort the transaction. Whatever the outcome of the transaction, when the transaction completes, COM+ will recycle the object(s) that took part in the transaction. This means that the lifetime of the transaction plays an enormous part in designing the interaction between objects partaking in either the same or different transactions.
There are two method pairs available to the developer with which to control the transaction scope and outcome. These are DisableCommit/EnableCommit and SetComplete/SetAbort.
The DisableCommit and EnableCommit methods allow you to set the value of the IsHappy flag. When an object running inside a transaction is deactivated and IsHappy is False, the transaction’s MustAbort flag is set to True, which will cause the transaction to be aborted. Note that these methods only set the value of IsHappy; they do not perform any other logic. This means that the value of this flag can be changed multiple times, but the value is only taken into account when the object is deactivated.
The SetComplete and SetAbort methods set the value of IsHappy to either True or False, and additionally set the value of IsDone to True. The value of the IsDone flag is examined every time an object returns control to its caller. If the IsDone value is True, COM+ deactivates this object, and the normal process to determine whether to commit or abort the transaction is started.
From the above it is clear that, if you are developing an object that may be reused by other objects on the BO level, care has to be taken when calling SetComplete and SetAbort. If your object participates in a transaction as a secondary object, and SetComplete or SetAbort is called, this object will be recycled (and the instance state will be lost) when it returns control to the calling object. This may not be what the calling object was expecting. An example of this is: In Object B, the Save method does a SetComplete at successful completion. Object A uses an instance of Object B – it loads an instance of Object B, does some work and saves, then does some more work and wants to save again. The problem is that after the first time Save is executed, Object B is recycled when it returns control to Object A, and the instance state is lost. Object A is not aware of this, as it actually connects to the context and not directly to Object B. The next time Object A tries to access any method or property of Object B, a new instance of Object B is created within the existing object context. Object A is totally unaware that it is now working with another instance of Object B. If Object A now performs logic that is dependent on the state of Object B, it will produce erroneous results.
Issues such as the above mean that you cannot call SetComplete unless you are absolutely sure that no other object expects you to hold your state. Because your component will be reused on a binary level, it has no guarantee of who uses it or exactly how it is being used. This means that great care has to be taken when calling SetComplete in your component, and that you have to inform any developer of another BO-level calling component of this fact at design time. You can alert the user of your component to the fact that you are using SetComplete in a method through documentation or a naming convention.
Any BO-level component should only be accessed from the CO level by its CO counterpart component. Because a component and its CO-level partner will always be designed and implemented together, the CO object can take the usage of SetComplete into account at the design stage.
When designing interaction between COM+ objects, I make use of the following rules (which I like to refer to as the “COM+ State Rules”):
Non Transactional COM+ objects can maintain instance state
Transactional COM+ objects maintain instance state for the duration of the transaction
Transactional COM+ objects cannot maintain instance state across transaction boundaries
Transaction scope can be managed by object scope
Transaction scope can be managed by calling SetComplete/SetAbort
Acquire resources late, release them early
Method Explained
The core concept behind the method I follow is to treat other BO objects as the primary users of your reusable BO object, with the BO object’s own CO counterpart seen as a secondary user of the object. This is quite a significant paradigm shift from seeing the CO object as the primary user of its BO counterpart. When the CO object is seen as the primary user, all methods on the BO object are normally geared towards a) getting state information from the CO, b) performing an action on this state and c) returning the changed state to the CO. When you view other BO objects as the primary users of your BO-level object, the interface to the functionality provided by the object changes from a simple Get-In, Do-your-stuff and Return-Results type of approach to a full OOP interface. The BO object needs to expose its methods and properties via an OOP interface to other BO objects. All the functionality available to other BO-level objects must, however, also be available to the object’s own CO counterpart. There must exist only one set of business logic on the BO layer, accessible to both the BO and the CO layer in the following ways:
A BO object (Object A) can instantiate an instance of another BO object (Object B). Object B exposes an OOP interface to Object A through which Object A is able to call methods and set and retrieve properties of Object B. Object A can control the scope of the instance of Object B and, by doing so, the scope of the transaction.
A CO object is able to call the business logic in its corresponding BO object via a wrapper function. It calls a function on the BO object and passes its instance state into this function in one or more state variables. The function executes the same method exposed to other BO-level objects, and when it is done, the CO can retrieve the changed instance state from the state variables. The CO object does not expect the BO object to maintain state between method calls, and sends and retrieves the instance state to and from the BO object with every function call.
The above requirements are met by following a very simple recipe on the BO object. The functionality on the BO level is provided in plain object-oriented fashion. The BO object has property procedures with private variables to maintain the instance state, and functionality is provided as methods that act on the properties of the object (and any secondary objects). Any BO object is trusted to never call SetComplete when accessed by other BO objects (we will examine the use of SetAbort in more detail when we look at error handling). If you have a look at the “COM+ State Rules”, you will see that a root object then only has to worry about the scope of its secondary objects, as object instance state and transaction management are done using the functionality provided by COM+.
CO access to the methods on the BO is provided by a wrapper function on the BO object. Note that I refer to it as a wrapper “function” and not a “method”. This is purely a naming convention I use to distinguish between the methods exposed for use by other BO objects (called methods), and the methods exposed for use by CO objects (I call these functions). The reason I refer to this specific type of method as a “function” is because it receives all the information it needs via the parameter list, as opposed to a normal method that acts on the existing state of an object. So, this wrapper “function” is implemented as a normal method of the object, but it takes as input a variable or variables carrying the total instance state of the object. This wrapper function is very simple, as it performs a maximum of three tasks:
1. The instance state as sent from the CO object is copied into the local state variables of the BO object
2. The corresponding method is executed
3. The local instance state is copied from the local variables into the “By Reference” variables from where the CO object will have access to them
Depending on the logic in the method, step 1 or 3 may be left out. If the method is used to retrieve instance state from the database, step 1 may be left out, while step 3 may be left out if the method does not make any changes to the instance state of the object.
On the CO object, the methods that call the functions on the BO object follow a similar pattern. The methods generally consist of the following steps:
1. The local instance state is packed into a variable
2. An instance of the BO object is created
3. The BO function is called and the local instance state is passed to this function
4. The BO object is destroyed
5. The instance state is unpacked into the local variables
Depending on the logic of the method executed, it may not be necessary to send state to the BO method (when loading the object from the database, for example), or to retrieve state back from the method (when deleting the object from the database).
So, for every method on the BO object, there is a “function” call to expose this method to its CO object. In order to distinguish between the function and the method, I use the following naming convention: the method and the function get the same name, with the prefix “f_” added to the start of the function’s name, indicating that this is the wrapper function and only to be called from the CO level.
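To make the recipe concrete, here is a rough sketch of a method/wrapper-function pair. It is written in C# for brevity, although the article targets VB6-era COM+; the class name, property, and method are illustrative, not from the original:

```csharp
public class CustomerBO
{
    // Private variable holding instance state, exposed through a property
    private string _name;
    public string Name { get { return _name; } set { _name = value; } }

    // Normal method, called by other BO-level objects; acts on instance state.
    // It never calls SetComplete, so a root object can keep using this instance.
    public void Rename(string newName)
    {
        _name = newName;
    }

    // Wrapper "function" for the CO level: receives the full instance state,
    // runs the same method, and hands the changed state back by reference.
    public void f_Rename(ref string name, string newName)
    {
        _name = name;      // 1. copy the CO's state into the local state variables
        Rename(newName);   // 2. execute the corresponding method
        name = _name;      // 3. copy the local state back for the CO object
    }
}
```

The CO side would pack its state into the `ref` variable, call `f_Rename`, and unpack the result, never assuming the BO instance survives between calls.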



.NET INTRODUCTION

Introducing .NET Part 1 - Introduction

Microsoft's .NET initiative is broad-based and very ambitious. It revolves around the .NET Framework, which encompasses the actual language and execution platform, plus extensive class libraries providing rich built-in functionality. At its core, the .NET framework embraces XML and SOAP to provide a new level of integration of software over the Internet. There is also a family of server-based products called .NET Enterprise Servers that are to be the next generation of Microsoft's BackOffice.

The .NET Framework introduces a completely new model for the programming and deployment of applications. In his PDC keynote speech, Bill Gates stated that a transition of this magnitude only comes along once every five to six years. The last comparable shifts were the switch from DOS to Windows in 1990 and the migration from 16-bit to 32-bit development (Windows 3.x to Windows 95/NT) in the mid-nineties.

Developers at PDC were excited by the prospect of a dramatically better architecture for software development within the .NET Framework, even though Microsoft made it clear that released products based on .NET are still well into the future. The first release will probably be Visual Studio.NET, the first beta of which became available in November 2000. No firm dates are available for the full commercial release, but general expectation is for fall 2001 at the earliest.

How important is this to Microsoft? Well, their executives have publicly stated that 80% of R&D resources in 2001 are being spent on .NET and the expectation is that most Microsoft products will be ported onto the .NET platform. Also C# (pronounced C-sharp), a new language specifically created for .NET, looks set to become the standard language for internal development within Microsoft.

A Broad and Deep Platform for the Future

Calling the Microsoft .NET Framework a "platform" doesn't begin to describe how broad and deep it is. It encompasses a virtual machine that abstracts away much of the Windows API from development. It includes a class library with more functionality than any other created to date, and a development environment that spans multiple languages. Furthermore, it exposes an architecture that makes multiple-language integration simple and straightforward.

In short, .NET presents a radically new approach to software development. This is the first development platform designed from the ground up with the Internet in mind. Previously, Internet functionality has simply been bolted on to pre-Internet operating systems like Unix and Windows. This has required Internet software developers to understand a host of technologies and integration issues. .NET is designed and intended for highly distributed software, making Internet functionality and interoperability easier and more transparent to include in systems than ever before.

The vision of .NET is globally distributed systems, using XML as the universal glue to allow functions running on different computers across an organization or across the world to come together in a single application. In this vision, systems from servers to wireless palmtops will share the same general platform, with versions of .NET available for all of them, and with each of them able to integrate transparently with the others. But this does not leave out classic applications as we've always known them. .NET also aims to make traditional business applications easier to develop and deploy. Some of the technologies of .NET, such as WinForms, demonstrate that Microsoft has not forgotten the traditional business developer.

Your Introduction to .NET

This book will preview much of the technology and structure of .NET, concentrating on the .NET Framework. It should help the reader understand the magnitude of the changes involved in the new technology, together with some of the benefits gained and costs endured. The aim is to enable intelligent decisions to be made about the short-term role of .NET, and establish a foundation from which further study can be conducted.

This first chapter will summarize many of the most important aspects of .NET. We'll start by looking at some of the serious drawbacks of current software development that prompted Microsoft to rethink their entire development structure. Then we'll progress to an overview of the overall vision and the major elements in the .NET Framework. Along the way, this chapter will refer to later chapters, which will fill in the details.

Please remember that this book discusses unreleased products. There will no doubt be many changes during the development cycle. In particular, many of the changes relating to language syntax and features are subject to revision. The bottom line is - don't bet the farm on the information presented here. Be prepared for changes as .NET moves closer to actual production.

Avoiding Confusion - the Role of the .NET Enterprise Servers

Microsoft has already released several products which they describe as part of the .NET Enterprise Server family. More of these are coming, and most will be released by the time this book is published. Some of the marketing literature for these products emphasizes that they are part of Microsoft's .NET strategy.

However, it is important that you understand the difference between these products and the .NET Framework, which is the major focus of this book. The .NET Enterprise Servers are not built on the .NET Framework. Most of them are successors to previous server-based products, and they use the same COM/COM+ technologies their predecessors did.


Chapter 12 in this book summarizes these products and explains their purposes. These .NET Enterprise Servers still have a major role to play in future software development projects. When actual .NET Framework projects are developed, most will depend on the technologies in the .NET Enterprise Servers for functions like data storage and messaging.

When this book refers to .NET, this should generally be understood to mean the technologies in the Microsoft .NET Framework.

What's Wrong With What We Have Now?

Starting in late 1995, Microsoft made a dramatic shift towards the Internet. The company was refocused on marrying their Windows platform to the Internet, and they have certainly succeeded in making Windows a serious Internet platform as well as a solid platform for all the business-oriented software developed with the Windows DNA programming model.

However, Microsoft had to make some serious compromises to quickly produce Internet-based tools and technologies. In particular, Active Server Pages (ASP) has always been viewed as a bit clumsy. After all, writing reams of interpreted script is a real step backwards from structured and object-oriented development. Designing, debugging and maintaining such unstructured code is also a headache.

Other languages such as Visual Basic have been used successfully in Internet applications on Microsoft platforms, but mostly as components that worked through Active Server Pages. Presently, Microsoft's tools lack the level of integration and ease-of-use for web development that would be ideal. A few attempts were made to place a web interface on traditional languages, such as WebClasses in VB, but none of these gained wide acceptance.

Microsoft has attempted to bring some order to the chaos with their concept of Windows DNA applications. DNA paints a broad picture of standard three-tier development based on COM, with Active Server Pages in the presentation layer, business objects in a middle layer, and a relational data store and engine in the bottom layer. The concept behind DNA is reasonably sound, but actually making it work has many challenges. Developing COM components requires a level of development expertise that takes a lot of time to reach, though some languages, such as Visual Basic, make it easier than others. Also, the deployment of DNA applications can be nightmarish, with many problems that can arise from the versioning and installation of components, and the components that they rely on.

Microsoft realized that, while it was possible to write good Internet applications with Windows-based technologies, it was highly desirable to find ways to develop applications faster and make it far easier to deploy them. Other platforms (such as Unix) and other development environments (such as ColdFusion) were continuing to raise the bar for developing Internet applications, making it essential that Microsoft address the limitations of the DNA programming model.



The Origins of .NET

At the beginning of 1998, a team of developers at Microsoft had just finished work on a new version of Internet Information Server (version 4.0), which included several new features in Active Server Pages. While developers were pleased to see new capabilities for Internet development on Windows NT, the team at Microsoft had many ideas for further improvement. That team began work on a new architecture implementing those ideas. This project eventually came to be known as Next Generation Windows Services (NGWS).

After Visual Studio 6 was released in late 1998, work on the next version of Visual Studio (then called Visual Studio 7) was folded into NGWS. The COM+/MTS team brought in their work on a universal runtime for all the languages in Visual Studio, which they intended to make available for third party languages as well.

The subsequent development was kept very much under wraps at Microsoft. Only key Microsoft partners realized the true importance of NGWS before it was re-christened .NET and introduced to the public at the PDC. By that point, development had been under way for over two years, and most attendees were pleasantly surprised by the enormous strides Microsoft had made.

The concepts in .NET draw inspiration from many sources. Previous architectures, from p-code in UCSD Pascal up through the Java Virtual Machine, have similar elements. Microsoft has taken many of the best ideas in the industry, combined with some ideas of their own, and brought them all into one coherent package.



What is 3 Tier Architecture

Why Three Tier Architecture?

The distributed object implementation of client/server computing is going to change the way applications are built. For one thing, if we needed fault-tolerant computing, we could place copies of objects on multiple servers; if any server were down, it would be possible to go to another site for service. With distributed objects being self-contained and executable (all data and procedures present), it will be possible for a systems administrator to tune the performance of the network by moving objects from overloaded hardware to underutilized computers. This approach is called tuning through "drag and drop", referring to the metaphor the administrator uses on a workstation to move the components.
By now the point is made. Client/server architectures are flexible and modular. They can be changed, added to, and evolved in numbers of ways.

As the Internet becomes a significant factor in computing environments, client/server applications operating over the Internet will become an important new type of distributed computing. (This is probably an understatement, since Internet- and intranet-based applications will very shortly dwarf all of the distributed computing initiatives of the past.)
The Internet will extend the reach and power of client/server computing. Through its promise of widely accepted standards, it will ease and extend client/server computing both within and between companies. The movement in programming languages towards distributed objects is going to happen at light speed, because of the Internet.


1.0 Disadvantages of Two-Tier Architecture

Limited number of users.
Limitation on expansion of business.
Time-critical information processing not possible.
Minimal interoperability with different databases.
System administration and configuration difficult.
No after-the-fact partitioning.
Batch jobs.

Limited number of users:
The two-tier design can scale up to serve about 100 users on a network. Beyond this number of users, performance capacity tends to be exceeded, because the client and server exchange "keep-alive" messages continuously, even when no work is being done, thereby saturating the network.

Limitation on expansion of business:

The two-tier architecture works well in relatively homogeneous environments with processing rules (business rules) that do not change very often, and when the workgroup is expected to number fewer than 100 users, such as in small businesses.

If you are planning to link different hospitals to allow medical records to be transferred between them, the users will obviously number more than 100, and the performance of the HIS will suffer. This can be a major limiting factor.

Time-critical information processing not possible:

Two-tier software architecture is used extensively in non-time-critical information processing, where management and operation of the system are not complex. This design is frequently used in decision support systems where the transaction load is light.

If you are planning to link different hospitals to allow medical records to be transferred between them, the number of users will be more than 100. The time taken to process a request for a medical record might then be a major limiting factor.

Minimal interoperability with different databases:

The two-tier architecture limits interoperability by using stored procedures to implement complex processing logic (such as managing distributed database integrity), because stored procedures are normally written in a commercial database management system's proprietary language. This means that, to change to or interoperate with more than one type of database management system, applications may need to be rewritten. Moreover, these proprietary languages are generally not as capable as standard programming languages: they do not provide a robust programming environment with testing and debugging, version control, and library management capabilities.

This is not a problem as long as the client is happy to use MS SQL Server, but it can be a limitation for our HIS, since we can only sell to those customers who are comfortable with MS SQL Server and Microsoft technologies. In the US, many hospitals already run hospital software on different databases. If we are able to deliver our HIS on the database they prefer, this will reduce their licensing costs.

System administration and configuration difficult:

Two-tier architectures can be difficult to administer and maintain because, when applications reside on the client, every upgrade must be delivered, installed, and tested on each client. The typical lack of uniformity in client configurations, and the lack of control over subsequent configuration changes, increase the administrative workload.

This often leads to the developer going to the client's site for coding and testing until the client is satisfied. Because the on-site developer is interested only in satisfying the client, he may not follow any software development standards. He may end up hard-coding values and introducing fresh bugs, as there may be no testing team to thoroughly test his changes.

There may be no documentation of the changes made, or of which version of a module is running at each site.

The client might also exploit the developer's presence and ask for more. The developer may not be in a position to judge whether a request is billable or non-billable.


After-the-fact partitioning:

Another problem with the two-tier approach is that current implementations provide no flexibility for "after the fact" partitioning. Once an application is developed, it is not easy to move (split) some of its functionality from one server to another; this would require manually regenerating procedural code. Some of the newer three-tier tools offer the capability to "drag and drop" application code modules onto different computers.

In a three-tier architecture, the middle tier hosting the components can be moved to another server to balance the load. This easy drag-and-drop approach to load balancing is not possible with a two-tier architecture: when the load increases, the code would need to be rewritten and divided into parts so that those parts could be shifted to other servers to improve performance.

2.0 Industry's response to limitations in the 2-tier architecture

The industry's response to limitations in the 2-tier architecture has been to add a third, middle tier, between the input/output device (PC on your desktop) and the DBMS server. This middle layer can perform a number of different functions - queuing, application execution, database staging and so forth. The use of client/server technology with such a middle layer has been shown to offer considerably more performance and flexibility than a 2-tier approach.
Just to illustrate one advantage of a middle layer, if that middle tier can provide queuing, the synchronous process of the 2-tier approach becomes asynchronous. In other words, the client can deliver its request to the middle layer, disengage and be assured that a proper response will be forthcoming at a later time. In addition, the middle layer adds scheduling and prioritization for the work in process. The use of architecture with such a middle layer is called "3-tier" or "multi-tier". These two terms are largely synonymous in this context.
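The queuing idea described above can be sketched in C# (the language used elsewhere on this blog). This is a minimal illustration with hypothetical names (RequestQueue, the request string), not a real middle-tier product: the client hands its request to the middle layer and disengages, while a background worker processes it asynchronously.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Minimal sketch of a queuing middle tier: the client enqueues a request and
// returns immediately; a worker thread picks it up and processes it later.
class RequestQueue
{
    private readonly Queue<string> _queue = new Queue<string>();
    private readonly object _lock = new object();

    // Client side: deliver the request and disengage.
    public void Enqueue(string request)
    {
        lock (_lock)
        {
            _queue.Enqueue(request);
            Monitor.Pulse(_lock);   // wake a waiting worker
        }
    }

    // Worker side: block until a request is available, then take it.
    public string Dequeue()
    {
        lock (_lock)
        {
            while (_queue.Count == 0)
                Monitor.Wait(_lock);
            return _queue.Dequeue();
        }
    }
}

class Program
{
    static void Main()
    {
        RequestQueue queue = new RequestQueue();

        // Middle-tier worker processes requests in the background.
        Thread worker = new Thread(() =>
        {
            string request = queue.Dequeue();
            Console.WriteLine("Processed: " + request);
        });
        worker.Start();

        // The client delivers its request and is free to do other work.
        queue.Enqueue("GetMedicalRecord(42)");
        worker.Join();
    }
}
```

Because the queue decouples the two sides, the synchronous request/response of the two-tier model becomes asynchronous, exactly as described above.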


3.0 Advantages of Three-Tier Architecture

Ease of development.
Ease of implementation.
Less software on the client.
A more scalable resulting application.
Lower installation costs.
DBMS-agnostic design.
After-the-fact partitioning.
The same interface for desktop and web-enabled applications.
Local testing.

Ease of Implementation:

Three-tier architectures facilitate software development because each tier can be built and executed on a separate platform, thus making it easier to organize the implementation.

Ease of Development:

Three-tier architectures readily allow different tiers to be developed in different languages, such as a graphical user interface language or light Internet clients (HTML, applets) for the top tier; C, C++, Smalltalk, or Basic for the middle tier; and SQL for much of the database tier.

Less software is on the client:

When less software is on the client, there is less worry about security since the important software is on a server in a more controlled environment.

Resulting application is more scalable:

The resulting application is more scalable with an application server approach. For one thing, servers are far more scalable than PCs. While a server could be a single Pentium-based Compaq or Dell, it could also be a symmetric multiprocessing Sequent with 32 or more processors, or a massively parallel UNIX machine like IBM's SP2.

Lower installation costs:

The support and installation costs of maintaining software on a single server are much lower than those of trying to maintain the same software on hundreds or thousands of PCs.

DBMS-agnostic:

With a middle application-server tier, it is much easier to design the application to be DBMS-agnostic. If you want to switch to another DBMS vendor, this is more achievable with reasonable effort in a single multithreaded application server than across thousands of applications on PCs.
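One common way to achieve this DBMS independence in the middle tier is to code clients against an interface and hide each vendor's details behind an implementation class. The C# sketch below is illustrative only; ICustomerStore and InMemoryStore are hypothetical names, and a real implementation would wrap SQL Server, Oracle, and so on.

```csharp
using System;
using System.Collections.Generic;

// The middle tier depends only on this interface; each DBMS gets its own
// implementation, so swapping vendors touches one class, not every client.
interface ICustomerStore
{
    string FindName(int id);
}

// A stand-in implementation for illustration; a real one would issue SQL
// against a particular database product.
class InMemoryStore : ICustomerStore
{
    private readonly Dictionary<int, string> _rows = new Dictionary<int, string>
    {
        { 1, "Dave" }, { 2, "John" }
    };

    public string FindName(int id) { return _rows[id]; }
}

class Program
{
    static void Main()
    {
        // Which implementation to use would be chosen by configuration.
        ICustomerStore store = new InMemoryStore();
        Console.WriteLine(store.FindName(2));   // prints "John"
    }
}
```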

After-the-fact partitioning:

Most new tools for implementing a three-tier application server approach offer "after the fact" application partitioning. This means that code and function modules can be reallocated to new servers after the application has been built, which offers important flexibility and performance benefits.

Same interface for desktop and web-enabled applications:

The same interface can be used to build a desktop, single-location application or a fully distributed application.

Local testing:

The application can be developed and tested locally, and you will know that it will work when it is distributed, because you depend on the known services of an object request broker for distribution.


The major downside to an application server approach to client/server computing is that the technology is much more difficult to implement than a 2-tier approach.

Conclusion:

All of the three-tier approaches described above can be mixed and matched in various combinations to satisfy almost any computing need.
Client/server remains the best architecture for taking advantage of the Internet and the other new technologies that come along. We will have to add "changes in client/server computing" to death and taxes on our list of inevitabilities. But regardless of what comes, client/server computing is likely to remain the underpinning for most computing developments we will see over the next decade.




Wednesday, March 3, 2010

.NET Framework Overview

The .NET Framework is Microsoft's platform for building applications that have visually stunning user experiences, seamless and secure communication, and the ability to model a range of business processes. The .NET Framework consists of:

* Common Language Runtime – provides an abstraction layer over the operating system
* Base Class Libraries – pre-built code for common low-level programming tasks
* Development frameworks and technologies – reusable, customizable solutions for larger programming tasks

By providing you with a comprehensive and consistent programming model and a common set of APIs, the .NET Framework helps you to build applications that work the way you want, in the programming language you prefer, across software, services, and devices.

Secure, Multi-Language Development Platform

Developers and IT professionals can count on .NET as a powerful and robust software development technology that provides the security advancements, management tools, and updates needed to build, test, and deploy highly reliable and secure software. .NET provides a multi-language development platform, so you can work in the programming language you prefer. The Common Language Runtime (CLR) supports powerful static languages such as Visual Basic and Visual C#, and the advent of the Dynamic Language Runtime (DLR) means that dynamic languages such as Managed JScript, IronRuby, and IronPython are also supported.

The .NET Compact Framework is a hardware-independent environment that supports building and running managed applications on resource-constrained computing devices. The .NET Compact Framework inherits the full .NET Framework architecture of the common language runtime and managed code execution, supports a subset of the .NET Framework class library, and contains classes designed exclusively for the .NET Compact Framework.

The .NET Micro Framework provides support for smaller devices as a new part of the entire .NET offering. You can now extend uniformly from very small devices to servers to the cloud using the same programming model and tool chain throughout. Small devices are increasingly part of larger solutions and now with .NET, there is no need to maintain separate staff and resources for the device-related portions of your projects. .NET provides the productivity and standardization that can greatly reduce your time to market. The .NET Micro Framework was built from the start as a solution for the embedded space, so it brings the power of modern computing with the low-level access that is needed to get the job done.
Next-Generation User Experiences

Windows Presentation Foundation (WPF) provides a unified framework for building applications and high-fidelity experiences in Windows that blend together application UI, documents, and media content, while exploiting the full power of the computer. WPF offers developers support for both 2D and 3D graphics, hardware accelerated effects, scalability to different form factors, interactive data visualization, and superior content readability. Further, with a common file format (XAML), designers can become an integral part of the development process by working alongside developers in a workflow that promotes creativity while maintaining full fidelity.

Silverlight, a runtime that contains a subset of the .NET Framework, helps developers expand their reach by providing a cross-browser, cross-platform, and cross-device plug-in for delivering the next generation of .NET-based media experiences, advertising and rich interactive applications (RIAs).
Cutting-Edge Web Application Development

ASP.NET is a free technology that enables Web developers to create anything from small, personal Web sites through to large, enterprise-class dynamic Web applications. Microsoft's free AJAX (Asynchronous JavaScript and XML) framework – ASP.NET AJAX – enables developers to quickly create more efficient, more interactive, and highly personalized Web experiences that work across all of the most popular browsers.
Secure, Reliable Web Services

For service-oriented programming, Windows Communication Foundation (WCF) unifies a broad array of distributed systems capabilities in a composable and extensible architecture, spanning transports, security systems, messaging patterns, encodings, network topologies, and hosting models.
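A minimal sketch of what a WCF service contract looks like in C#. The [ServiceContract] and [OperationContract] attributes are real WCF attributes from System.ServiceModel, but the service name and operation here are invented for illustration. Note that the contract says nothing about transport, encoding, or hosting; WCF supplies those separately through bindings and configuration.

```csharp
using System;
using System.ServiceModel;

// The contract: describes *what* the service does, independent of *how*
// messages travel (HTTP, TCP, MSMQ, ...) -- that is chosen by the binding.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

// The implementation is ordinary C#; WCF hosts it behind an endpoint.
public class GreetingService : IGreetingService
{
    public string Greet(string name)
    {
        return "Hello, " + name;
    }
}

class Program
{
    static void Main()
    {
        // The same class can be exercised directly, without any transport,
        // which also makes the service logic easy to unit test.
        IGreetingService svc = new GreetingService();
        Console.WriteLine(svc.Greet("WCF"));
    }
}
```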
Mission-Critical Business Processes

With .NET, developers can use Windows Workflow Foundation (WF) to model a business process with code, enabling closer collaboration between developers and business process owners, and providing end users with better access to data, thereby improving productivity.
Flexible Data Access Options


ADO.NET is a set of classes that expose data access services to the .NET programmer. ADO.NET provides a rich set of components for creating distributed, data-sharing applications. It is an integral part of the .NET Framework, providing access to relational, XML, and application data. ADO.NET supports a variety of development needs, including the creation of front-end database clients and middle-tier business objects used by applications, tools, languages, or Internet browsers.
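A small illustration of ADO.NET's disconnected model, using the real DataTable class from System.Data; the table and column names are invented. A DataAdapter (not shown) would fill such a table from a relational store and push changes back.

```csharp
using System;
using System.Data;

class Program
{
    static void Main()
    {
        // A DataTable holds rows in memory, independent of any database.
        DataTable customers = new DataTable("Customers");
        customers.Columns.Add("Id", typeof(int));
        customers.Columns.Add("Name", typeof(string));
        customers.Rows.Add(1, "Dave");
        customers.Rows.Add(2, "Abe");

        // Query the in-memory table with a filter expression.
        DataRow[] match = customers.Select("Name = 'Abe'");
        Console.WriteLine(match[0]["Id"]);   // prints 2
    }
}
```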

ADO.NET Entity Framework simplifies application data access by providing an extensible, conceptual model for data from any database and enables this model to closely reflect business requirements.

ADO.NET Data Services provides a first-class infrastructure for the next wave of dynamic internet applications by enabling Web applications to expose data as REST-based data services that can be consumed by client applications in corporate networks and across the internet.