"If you can't explain it simply, you don't understand it well enough" -Albert Einstein


2015-01-30

A Guide For Dealing With Hierarchical, Parent-Child And Tree Form Data Operations



Lately I have been working on more than one project dealing with hierarchical data structures. This encouraged me to try to sum up my experience with this type of structure and its corresponding operations.

From time to time I will revisit this post to update it with whatever new I find related to this topic. Please find the assembled posts below and let me know if you have any comments.


How To Apply Recursive SQL Selections On Hierarchical Data
Sometimes we work on systems where a hierarchical data structure exists for some entities, like employees and their managers. Both employees and managers can be called employees, but there is a self-join relation between them as each employee must have a manager. We have all faced the situation where we need to get the info about each employee and his/her direct manager. At this point we usually join the employees (child) table with itself (parent) on the condition that the parent id of the child equals the id of the parent. This is good. But what if we need to get the whole hierarchical tree of managers of a certain employee, not just his direct manager? It seems we just need to repeat the same join until we are up all the way to the head manager. This sounds logical, but how can we perform this number of joins when we don't know the number of levels up to the head manager? If you want to know more, you can read this article.
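To make the idea more concrete, here is a minimal sketch of such a recursive selection using a common table expression. The Employees table and its ID, Name and ManagerID columns are hypothetical names used only for this illustration; the linked article covers the technique in detail.

-- Hypothetical Employees(ID, Name, ManagerID) table; ManagerID is a self reference to ID
DECLARE @EmployeeID INT = 42;

WITH ManagersChain (ID, Name, ManagerID, Level) AS
(
 -- Seed: the employee we start from
 SELECT ID, Name, ManagerID, 0
 FROM Employees
 WHERE ID = @EmployeeID

 UNION ALL

 -- Recursive part: climb one level up to the direct manager on each iteration
 SELECT m.ID, m.Name, m.ManagerID, c.Level + 1
 FROM Employees AS m
 INNER JOIN ManagersChain AS c
 ON c.ManagerID = m.ID
)
SELECT ID, Name, ManagerID, Level
FROM ManagersChain
WHERE Level > 0 -- managers only, all the way up to the head manager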


How To Transform Unsorted Flat Hierarchical Data Structures Into Nested Parent-Child Or Tree Form Objects
Sometimes you have a hierarchical data structure stored in an SQL database table where there is a parent-child relation between rows. You may need to transform this flat hierarchical data structure into a parent-child or tree form entity so that you can use it in more advanced or complex scenarios, like binding to a tree control. If you want to know how to do this transformation process, you can read this article.


How To Copy SQL Hierarchical Data At Run-time While Keeping Valid Internal References And Self Joins
Sometimes when you deal with hierarchical data structures you may need to perform internal copy operations. These copy operations should be handled carefully, as simply using INSERT-SELECT statements will mess up the internal references and self joins of the newly inserted records. If you want to know more about this, you can read this article.


For SQL Hierarchical Data, How To Show Only Tree Branches Including Certain Type Of Entities And Discard Others
If you have a tree of entities represented in a SQL database as a parent-child relation, you may need to view only the tree branches which include a certain type of entities and discard the branches which don't include this type. If you have ever faced such a situation, or are curious to know how to manage it, you can read this article.



That's it. Hope this will help you someday.


2015-01-23

Splitting Daytime Into Chunks To Enhance SQL Bulk Time-based Operations Performance





The best way to understand what this post is about is to start with a real scenario.

One of my colleagues was building a system which controls some motors using readings coming from electronic sensors. The problem was that the frequency of the sensor readings was so high that the system had to insert several reading records into the SQL database within a few milliseconds. This means that in just one second the system had to insert hundreds or even thousands of records into the database.

This was a big challenge, and due to customer needs and business requirements the approved solution was to compress the sensor readings per fixed time range at the end of the day. This means that at the end of each day the sensor readings should be divided into groups, where each group is limited to a time range of half an hour for example, and then the readings of each group should be aggregated by taking the average. This way we end up having hundreds of reading records in the database instead of thousands or even more. So far everything is reasonable and logical.

A new challenge arose while working on the SQL routine which would carry out the bulk aggregation process. The performance of this process was not promising: it took a huge amount of time due to the heavy date-time comparisons and groupings. This is what encouraged me to jump in and try to help.

Before applying any changes, the process went like this: get all sensor readings which were created between 00:00:00 and 00:15:00 and take their average, then get all sensor readings which were created between 00:15:00 and 00:30:00 and take their average, and so on.
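To make the cost more tangible, the original approach was roughly equivalent to the sketch below: one pass with date-time comparisons per time range, repeated for every range of the day. The SensorReadings table and its columns match the schema defined later in this post; the sketch itself is only an illustration, not the exact original routine.

-- Rough sketch of the original approach: one date-time range comparison pass per chunk
SELECT AVG(Reading)
FROM SensorReadings
WHERE CAST(CreationDateTime AS TIME(0)) >= '00:00:00'
AND CAST(CreationDateTime AS TIME(0)) < '00:15:00'

SELECT AVG(Reading)
FROM SensorReadings
WHERE CAST(CreationDateTime AS TIME(0)) >= '00:15:00'
AND CAST(CreationDateTime AS TIME(0)) < '00:30:00'

-- ...and so on for every remaining range of the day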

As you can see, this amount of time-based processing and grouping is huge, and it was causing the whole process to be a nightmare performance-wise.

One of the best ways to enhance the performance of bulk actions is to break the one big process into small controllable ones. Then, observe each small process and check whether, for one record, it depends only on the record itself and doesn't need to know anything about other records. If so, this small process can be performed on a single-record basis at an early stage, which could be the moment the record is created or some other point based on the business requirements. This helps split the huge processing effort across different moments and stages of the system's business life cycle and workflow, which makes it more controllable and far less noticeable.

Now, to apply the same concept to the problem we have on hand, we should notice that:
  1. The chunks of time ranges (15 minutes) are the same for all readings and they do not change for any reason unless requested by a system user, which is done manually or through user intervention
  2. The creation date of each sensor reading record is known at the moment the record is created
  3. Most of the processing effort is spent on the date-time comparisons followed by the groupings
This helped me decide that:
  1. The chunks of time ranges should be defined only once at system launch and the results should be kept physically in a table to be used as a quick cached reference
  2. The time chunk to which each sensor reading record belongs should be decided at the moment the record is created
  3. This way each record can carry the id of the time chunk it belongs to, which makes the grouping process much easier and almost effortless


Create Database
USE [master]
GO

CREATE DATABASE [DayChunks] ON  PRIMARY 
( NAME = N'DayChunks', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\DayChunks.mdf' , SIZE = 4096KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
 LOG ON 
( NAME = N'DayChunks_log', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\DayChunks_log.ldf' , SIZE = 1280KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

ALTER DATABASE [DayChunks] SET COMPATIBILITY_LEVEL = 100
GO

IF (1 = FULLTEXTSERVICEPROPERTY('IsFullTextInstalled'))
begin
EXEC [DayChunks].[dbo].[sp_fulltext_database] @action = 'enable'
end
GO

ALTER DATABASE [DayChunks] SET ANSI_NULL_DEFAULT OFF 
GO

ALTER DATABASE [DayChunks] SET ANSI_NULLS OFF 
GO

ALTER DATABASE [DayChunks] SET ANSI_PADDING OFF 
GO

ALTER DATABASE [DayChunks] SET ANSI_WARNINGS OFF 
GO

ALTER DATABASE [DayChunks] SET ARITHABORT OFF 
GO

ALTER DATABASE [DayChunks] SET AUTO_CLOSE OFF 
GO

ALTER DATABASE [DayChunks] SET AUTO_CREATE_STATISTICS ON 
GO

ALTER DATABASE [DayChunks] SET AUTO_SHRINK OFF 
GO

ALTER DATABASE [DayChunks] SET AUTO_UPDATE_STATISTICS ON 
GO

ALTER DATABASE [DayChunks] SET CURSOR_CLOSE_ON_COMMIT OFF 
GO

ALTER DATABASE [DayChunks] SET CURSOR_DEFAULT  GLOBAL 
GO

ALTER DATABASE [DayChunks] SET CONCAT_NULL_YIELDS_NULL OFF 
GO

ALTER DATABASE [DayChunks] SET NUMERIC_ROUNDABORT OFF 
GO

ALTER DATABASE [DayChunks] SET QUOTED_IDENTIFIER OFF 
GO

ALTER DATABASE [DayChunks] SET RECURSIVE_TRIGGERS OFF 
GO

ALTER DATABASE [DayChunks] SET  DISABLE_BROKER 
GO

ALTER DATABASE [DayChunks] SET AUTO_UPDATE_STATISTICS_ASYNC OFF 
GO

ALTER DATABASE [DayChunks] SET DATE_CORRELATION_OPTIMIZATION OFF 
GO

ALTER DATABASE [DayChunks] SET TRUSTWORTHY OFF 
GO

ALTER DATABASE [DayChunks] SET ALLOW_SNAPSHOT_ISOLATION OFF 
GO

ALTER DATABASE [DayChunks] SET PARAMETERIZATION SIMPLE 
GO

ALTER DATABASE [DayChunks] SET READ_COMMITTED_SNAPSHOT OFF 
GO

ALTER DATABASE [DayChunks] SET HONOR_BROKER_PRIORITY OFF 
GO

ALTER DATABASE [DayChunks] SET  READ_WRITE 
GO

ALTER DATABASE [DayChunks] SET RECOVERY FULL 
GO

ALTER DATABASE [DayChunks] SET  MULTI_USER 
GO

ALTER DATABASE [DayChunks] SET PAGE_VERIFY CHECKSUM  
GO

ALTER DATABASE [DayChunks] SET DB_CHAINING OFF 
GO


Create Tables
USE [DayChunks]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

CREATE TABLE [dbo].[Settings](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [Name] [nvarchar](max) NOT NULL,
 [Value] [nvarchar](max) NULL,
 CONSTRAINT [PK_Settings] PRIMARY KEY CLUSTERED 
(
 [ID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO





CREATE TABLE [dbo].[DayChunks](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [ChunkNumOfMinutes] [int] NOT NULL,
 [ChunkNumOfSeconds] [int] NOT NULL,
 [ChunkStart] [time](0) NOT NULL,
 [ChunkEnd] [time](0) NOT NULL,
 [IsLastChunk] [bit] NOT NULL,
 CONSTRAINT [PK_DayChunks] PRIMARY KEY CLUSTERED 
(
 [ID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[DayChunks] ADD  CONSTRAINT [DF_DayChunks_IsLastChunk]  DEFAULT (0) FOR [IsLastChunk]
GO




CREATE TABLE [dbo].[SensorReadings](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [CreationDateTime] [datetime] NOT NULL,
 [Reading] [float] NOT NULL,
 [DayChunkID] [int] NULL,
 [DayDate] [date] NOT NULL,
 CONSTRAINT [PK_SensorReadings] PRIMARY KEY CLUSTERED 
(
 [ID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE [dbo].[SensorReadings] ADD  CONSTRAINT [DF_SensorReadings_CreationDateTime]  DEFAULT (getdate()) FOR [CreationDateTime]
GO

ALTER TABLE [dbo].[SensorReadings] ADD  CONSTRAINT [DF_SensorReadings_DayDate]  DEFAULT (getdate()) FOR [DayDate]
GO





CREATE TABLE [dbo].[AggregatedSensorReadings](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [DayChunkID] [int] NULL,
 [Reading] [float] NOT NULL,
 [DayDate] [date] NOT NULL,
 CONSTRAINT [PK_AggregatedSensorReadings] PRIMARY KEY CLUSTERED 
(
 [ID] ASC
)WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

Create Routines
USE [DayChunks]
GO


SET ANSI_NULLS ON
GO

IF EXISTS
(
 SELECT * FROM dbo.sysobjects
 WHERE id = object_id(N'[dbo].[SetDayChunksSettings]')
 AND OBJECTPROPERTY(id, N'IsProcedure') = 1
)
 
DROP PROCEDURE [dbo].[SetDayChunksSettings]

GO
CREATE PROCEDURE [dbo].[SetDayChunksSettings]
(
    @ChunksNumOfMinutes INT
 , @ChunksNumOfSeconds INT
 , @ChunksStartTime TIME(0)
)
AS
BEGIN
 DELETE FROM Settings
 WHERE Name IN
 (
  'DayChunksStartTime'
  , 'DayChunksNumOfMinutes'
  , 'DayChunksNumOfSeconds'
 )
 
 INSERT INTO Settings
 (
  Name
  , Value
 )
 VALUES
 (
  'DayChunksStartTime'
  , CAST(@ChunksStartTime AS NVARCHAR(MAX))
 )
 ,
 (
  'DayChunksNumOfMinutes'
  , CAST(@ChunksNumOfMinutes AS NVARCHAR(MAX))
 )
 ,
 (
  'DayChunksNumOfSeconds'
  , CAST(@ChunksNumOfSeconds AS NVARCHAR(MAX))
 )
END
GO





IF EXISTS
(
 SELECT * FROM dbo.sysobjects
 WHERE id = object_id(N'[dbo].[CreateDayChunksBasedOnInput]')
 AND OBJECTPROPERTY(id, N'IsProcedure') = 1
)
 
DROP PROCEDURE [dbo].[CreateDayChunksBasedOnInput]

GO
CREATE PROCEDURE [dbo].[CreateDayChunksBasedOnInput]
(
    @ChunksNumOfMinutes INT
 , @ChunksNumOfSeconds INT
 , @ChunksStartTime TIME(0)
)
AS
BEGIN
 DECLARE @MaxEnd TIME(0) = DATEADD(minute, (-1 * @ChunksNumOfMinutes), @ChunksStartTime)
 SET @MaxEnd = DATEADD(second, (-1 * @ChunksNumOfSeconds), @MaxEnd)

 DECLARE @ChunkStart TIME(0) = @ChunksStartTime

 DECLARE @ChunkEnd TIME(0) = DATEADD(minute, @ChunksNumOfMinutes, @ChunksStartTime)
 SET @ChunkEnd = DATEADD(second, @ChunksNumOfSeconds, @ChunkEnd)
 
 TRUNCATE TABLE DayChunks
 
 WHILE (@ChunkEnd < @MaxEnd)
 BEGIN
  INSERT INTO DayChunks
  (
   ChunkNumOfMinutes
   , ChunkNumOfSeconds
   , ChunkStart
   , ChunkEnd
  )
  VALUES
  (
   @ChunksNumOfMinutes
   , @ChunksNumOfSeconds
   , @ChunkStart
   , @ChunkEnd
  )
     
  SET @ChunkStart = DATEADD(minute, @ChunksNumOfMinutes, @ChunkStart)
  SET @ChunkStart = DATEADD(second, @ChunksNumOfSeconds, @ChunkStart)
     
  SET @ChunkEnd = DATEADD(minute, @ChunksNumOfMinutes, @ChunkEnd)
  SET @ChunkEnd = DATEADD(second, @ChunksNumOfSeconds, @ChunkEnd)
 END

 IF(@ChunkEnd = @MaxEnd OR @ChunkEnd > @MaxEnd)
 BEGIN
  IF(@ChunkEnd > @MaxEnd)
  BEGIN
   SET @ChunkEnd = @MaxEnd
  END
  
  INSERT INTO DayChunks
  (
   ChunkNumOfMinutes
   , ChunkNumOfSeconds
   , ChunkStart
   , ChunkEnd
  )
  VALUES
  (
   @ChunksNumOfMinutes
   , @ChunksNumOfSeconds
   , @ChunkStart
   , @ChunkEnd
  )
  
  INSERT INTO DayChunks
  (
   ChunkNumOfMinutes
   , ChunkNumOfSeconds
   , ChunkStart
   , ChunkEnd
   , IsLastChunk
  )
  VALUES
  (
   @ChunksNumOfMinutes
   , @ChunksNumOfSeconds
   , @MaxEnd
   , @ChunksStartTime
   , 1
  )
 END
END
GO





IF EXISTS
(
 SELECT * FROM dbo.sysobjects
 WHERE id = object_id(N'[dbo].[CreateDayChunksBasedOnSettings]')
 AND OBJECTPROPERTY(id, N'IsProcedure') = 1
)
 
DROP PROCEDURE [dbo].[CreateDayChunksBasedOnSettings]
GO

CREATE PROCEDURE [dbo].[CreateDayChunksBasedOnSettings]
AS
BEGIN
 DECLARE @ChunksNumOfMinutes INT
 DECLARE @ChunksNumOfSeconds INT
 DECLARE @ChunksStartTime TIME(0)
 
 IF EXISTS (SELECT TOP 1 ID FROM Settings WHERE Name = 'DayChunksNumOfMinutes')
 BEGIN
  SELECT TOP 1 @ChunksNumOfMinutes = 
   CASE
    WHEN Value IS NULL THEN CAST(0 AS INT)
    ELSE CAST (Value AS INT)
   END
  FROM Settings
  WHERE Name = 'DayChunksNumOfMinutes'
 END
 ELSE
 BEGIN
  SET @ChunksNumOfMinutes = 0
 END

 IF EXISTS (SELECT TOP 1 ID FROM Settings WHERE Name = 'DayChunksNumOfSeconds')
 BEGIN
  SELECT TOP 1 @ChunksNumOfSeconds = 
   CASE
    WHEN Value IS NULL THEN CAST(0 AS INT)
    ELSE CAST (Value AS INT)
   END
  FROM Settings
  WHERE Name = 'DayChunksNumOfSeconds'
 END
 ELSE
 BEGIN
  SET @ChunksNumOfSeconds = 0
 END

 IF EXISTS (SELECT TOP 1 ID FROM Settings WHERE Name = 'DayChunksStartTime')
 BEGIN
  SELECT TOP 1 @ChunksStartTime = 
   CASE
    WHEN Value IS NULL THEN CAST('00:00:00' AS TIME(0))
    ELSE CAST (Value AS TIME(0))
   END
  FROM Settings
  WHERE Name = 'DayChunksStartTime'
 END
 ELSE
 BEGIN
  SET @ChunksStartTime = CAST('00:00:00' AS TIME(0))
 END
 
 EXEC [dbo].[CreateDayChunksBasedOnInput] @ChunksNumOfMinutes, @ChunksNumOfSeconds, @ChunksStartTime
END
GO





SET ANSI_NULLS ON
GO

IF EXISTS
(
 SELECT *
 FROM INFORMATION_SCHEMA.ROUTINES
 WHERE ROUTINE_NAME = 'GetDateTimeDayChunkID'
 AND ROUTINE_SCHEMA = 'dbo'
 AND ROUTINE_TYPE = 'FUNCTION'
)
 
DROP FUNCTION [dbo].[GetDateTimeDayChunkID]
GO

CREATE FUNCTION [dbo].[GetDateTimeDayChunkID] 
(
 @InputDateTime DATETIME
)
RETURNS INT
AS
BEGIN
 DECLARE @Result INT
 DECLARE @InputTime TIME(0) = @InputDateTime
 
 SELECT TOP 1 @Result = ID
 FROM DayChunks
 WHERE @InputTime BETWEEN ChunkStart AND ChunkEnd
 
 IF(@Result IS NULL)
 BEGIN
  SET @Result = (SELECT TOP 1 ID FROM DayChunks WHERE IsLastChunk = 1)
 END
 
 RETURN @Result
END
GO

Testing Solution
The script below tests the solution by simulating one sensor reading every second and aggregating the readings per one-minute chunks.
USE [DayChunks]
GO


-- Setting day chunks settings
-- DayChunksStartTime = '00:00:00'
-- DayChunksNumOfMinutes = 1
-- DayChunksNumOfSeconds = 0
-- This means that the day starts at 00:00:00 and is split into chunks, each 1 minute long
EXEC [dbo].[SetDayChunksSettings] 1, 0, '00:00:00'

-- Splitting the day into proper chunks based on the settings set in the previous step
EXEC [dbo].[CreateDayChunksBasedOnSettings]
GO


-- Simulating sensor readings every one second
TRUNCATE TABLE SensorReadings
GO

DECLARE @i INT;
DECLARE @SensorReading FLOAT
DECLARE @Now DATETIME

SET @i = 1;
WHILE (@i <= 90)
BEGIN
 WAITFOR DELAY '00:00:01'
 
 DECLARE @CreationDateTime1 DATETIME = GETDATE()
 DECLARE @RandomReading1 FLOAT
 SELECT TOP 1 @RandomReading1 = RAND()
 
 INSERT INTO SensorReadings
 (
  Reading
  , CreationDateTime
  , DayChunkID
 )
 VALUES
 (
  @RandomReading1
  , @CreationDateTime1
  , [dbo].[GetDateTimeDayChunkID](@CreationDateTime1)
 )
 
 SET  @i = @i + 1;
END


SELECT *
FROM SensorReadings


-- Applying aggregation on sensor readings
TRUNCATE TABLE AggregatedSensorReadings
GO

INSERT INTO AggregatedSensorReadings
(
 DayChunkID
 , Reading
 , DayDate
)
SELECT DayChunkID
, AVG(Reading)
, DayDate
FROM SensorReadings
GROUP BY DayDate, DayChunkID


SELECT AggregatedSensorReadings.*
, DayChunks.ChunkStart
, DayChunks.ChunkEnd
FROM AggregatedSensorReadings
LEFT OUTER JOIN DayChunks
ON DayChunks.ID = AggregatedSensorReadings.DayChunkID

Result

Screenshots of the resulting table contents: Settings, DayChunks (1 and 2), SensorReadings (1 and 2), AggregatedSensorReadings



That's it. This is just one example of applying the main concept of splitting the day into time chunks. You can still take this concept and adapt the code to your business needs.


Hope this will help you someday.
Good Luck.


2014-09-17

How To Develop/Adjust ASP.NET User Controls For Multiple Instances Support




When developing ASP.NET user controls you should keep in mind whether they need to support multiple instances or not. In other words, you should decide whether more than one instance of your user control could be added to the same page. Why do you need to make up your mind on this? Wait and see.


Code Sample
You can download the code sample from here


Assume that you need to develop a user control which is simply a text box and a clear button. Once the clear button is clicked, the text inside the text box should be cleared. Also, our user control should support multiple instances.

We will create the solution as in the image below.


Here we have two approaches to implement our user control: a BAD approach which will cause some issues as we will see, and a GOOD approach which will work perfectly. We will start with the bad approach to fully understand why we need the good one.


Bad Approach

TextWithClear.ascx:
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="TextWithClear.ascx.cs" Inherits="DevelopmentSimplyPut.MultiInstanceUserControl.Controls.TextWithClear" %>

<script type="text/javascript" src="../Scripts/jquery-1.11.0.min.js"></script>

<div style="border-color:red;border-width:thin;border-style:solid;width:20%;">
    <input id="txt_MyTextBox" name="txt_MyTextBox" type="text" value="" />
    <br />
    <input id="btn_Clear" type="button" value="Clear" onclick = "Clear();" />
</div>

<script type="text/javascript">
    function Clear() {
        $("#txt_MyTextBox").val("");
    }
</script>

Home.aspx:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Home.aspx.cs" Inherits="DevelopmentSimplyPut.MultiInstanceUserControl.Home" %>

<%@ Register Src="~/Controls/TextWithClear.ascx" TagPrefix="uc1" TagName="TextWithClear" %>


<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <uc1:TextWithClear runat="server" id="TextWithClear1" />
        <br />
        <uc1:TextWithClear runat="server" id="TextWithClear2" />
    </div>
    </form>
</body>
</html>


After running the application, we will get the image below.


As we can see both text boxes have the same id. Clicking the "Clear" button of the first instance will cause the first text box to be cleared as in the image below.


Unfortunately clicking the "Clear" button of the second instance also causes the first text box (not the second text box) to be cleared as in the image below.


This happened because both text boxes have the same id, so the javascript code selecting the text box by id will always return the first one only and then apply the clear action. That is why clicking the "Clear" button of either user control instance will always clear the first text box.

This is the time to try the good approach.


Good Approach

TextWithClear.ascx:
<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="TextWithClear.ascx.cs" Inherits="DevelopmentSimplyPut.MultiInstanceUserControl.Controls.TextWithClear" %>

<script type="text/javascript" src="../Scripts/jquery-1.11.0.min.js"></script>

<div id="MainContainer" runat="server" style="border-color:red;border-width:thin;border-style:solid;width:20%;">
    <input id="txt_MyTextBox" name="txt_MyTextBox" type="text" value="" />
    <br />
    <input id="btn_Clear" type="button" value="Clear" onclick = "GetCurrentTextWithClearControlManager<%= this.ClientID %>().Clear();" />
</div>

<script type="text/javascript">
    function TextWithClearControlManager(_controlClientId) {
        this.ControlClientId = _controlClientId;
        this.GetMainContainerDomElement = function GetMainContainerDomElement() {
            return $("div[id^=" + this.ControlClientId + "][id$=MainContainer]").eq(0);
        };
        this.Clear = function Clear() {
            this.GetMainContainerDomElement().find("#txt_MyTextBox").val("");
        };
    }

    function GetCurrentTextWithClearControlManager<%= this.ClientID %>() {
        return new TextWithClearControlManager('<%= this.ClientID %>');
    }
</script>

Home.aspx:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Home.aspx.cs" Inherits="DevelopmentSimplyPut.MultiInstanceUserControl.Home" %>

<%@ Register Src="~/Controls/TextWithClear.ascx" TagPrefix="uc1" TagName="TextWithClear" %>


<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <uc1:TextWithClear runat="server" id="TextWithClear1" />
        <br />
        <uc1:TextWithClear runat="server" id="TextWithClear2" />
    </div>
    </form>
</body>
</html>


After running the application, we will get the image below.


As we can see, both text boxes still have the same id, but the outer container div of each user control has its own unique id, which is composed of the id of the user control and the static id given to the div. Clicking the "Clear" button of the first instance will cause the first text box to be cleared as in the image below.


Now clicking the "Clear" button of the second instance causes the second text box to be cleared as in the image below.


This is good news but what really happened here?

Steps
  1. Wrapped our user control in a main container DOM element, or used an already existing one as in our case here
  2. Gave it an id, which is "MainContainer" in our case
  3. Marked it as a server control by adding runat="server"
  4. Added a javascript function which acts as a constructor for a manager object. This manager object is tied to a single user control instance; it controls and serves only the instance whose client id is passed to its constructor
  5. Inside this manager object we defined a function called "GetMainContainerDomElement" which is responsible for returning the main container DOM element of the corresponding user control. In our case it returns the outer "MainContainer" div of the corresponding user control
  6. Also inside this manager object we defined the clear function, but this time we make use of the "GetMainContainerDomElement" function to focus on the outer div of the user control corresponding to the current manager object. Then we select the proper text box which is a child of this main div, not another div. This way we make sure that all actions will be applied on the corresponding user control instance without affecting any other instances
  7. Created a function whose name is composed of "GetCurrentTextWithClearControlManager" followed by the client id of the user control. This makes sure that the functions of the different user control instances do not replace and overwrite each other, as they all end up on the same page. This function is used to return the manager object of each user control instance to be used inside DOM element tags. In our case we used it as follows: onclick = "GetCurrentTextWithClearControlManager<%= this.ClientID %>().Clear();"
Now each "Clear" button accesses its corresponding manager object and fires the right "Clear" function.


That's it. Hope this will help you someday.
Good luck.



2014-06-21

How To Access ASP.NET Web.config AppSettings On Client-Side Javascript



You can download the code presented in this post from here


There are many ways by which you can access your ASP.NET web application's web.config application settings from your client-side javascript code. The common thing between all of these ways is that, to do so, you need to hit the server side.

The way I prefer is to load all your application settings in one batch the first time your page is loaded, and I think the best practice here is to use a handler to achieve this task. The handler will be responsible for accessing the web.config, retrieving all the application settings keys and their corresponding values, and finally returning the response as a javascript file to be loaded once the handler is requested.

This way, all you have to do is include the handler as a javascript resource on your page or master page; then, once the page is loaded, you will have all your application settings available as javascript variables.

Let's have a look at the code below to see what I am talking about.



Web.config
<?xml version="1.0"?>

<!--
  For more information on how to configure your ASP.NET application, please visit
  http://go.microsoft.com/fwlink/?LinkId=169433
  -->

<configuration>
    <system.web>
      <compilation debug="true" targetFramework="4.5" />
      <httpRuntime targetFramework="4.5" />
    </system.web>

  <appSettings>
    <add key="SampleSetting" value="This is the setting value"/>
  </appSettings>
  
</configuration>

ClientGlobalVars.ashx.cs
using System;
using System.Collections.Generic;
using System.Configuration;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Web;
using System.Web.SessionState;
using System.Collections.Specialized;

namespace DevelopmentSimplyPut.Handlers
{
    public class ClientGlobalVars : IHttpHandler, IRequiresSessionState 
    {
        public void ProcessRequest(HttpContext context)
        {
			context.Response.ClearHeaders();
			context.Response.ContentType = "application/x-javascript";
			context.Response.Cache.SetCacheability(HttpCacheability.Public);
			context.Response.CacheControl = Convert.ToString(HttpCacheability.Public);
            context.Response.Write("var AppSettings = new Object();\n");

            NameValueCollection appSettings = ConfigurationManager.AppSettings;

            for (int i = 0; i < appSettings.Count; i++)
            {
                string key = appSettings.GetKey(i);
                string value = appSettings.Get(i);
                context.Response.Write(string.Format(CultureInfo.InvariantCulture, "AppSettings.{0} = '{1}';\n", key, value));
            }
        }

        public bool IsReusable
        {
            get
            {
                return false;
            }
        }
    }
}

ClientGlobalVars.ashx
<%@ WebHandler Language="C#" CodeBehind="ClientGlobalVars.ashx.cs" Class="DevelopmentSimplyPut.Handlers.ClientGlobalVars" %>

Default.aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="AccessAppSettingsFromJs.Default" %>

<!DOCTYPE html>

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <script type="text/javascript" src="/Handlers/ClientGlobalVars.ashx"></script>

    <script>
        alert(AppSettings.SampleSetting);
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    </div>
    </form>
</body>
</html>

So now once you open Default.aspx in a web browser you get the result in the image below.



That's it. You can now access your application settings from client-side javascript.


2014-06-16

Paging Concept - The Main Equations To Make It Easy




The paging concept is used in so many fields that it even shows up in our daily lives. When you have a set of items and you want to divide them equally between some sort of containers or groups, you are thinking of paging, but maybe you don't recognize it.

The aim of this post is to explain some mathematical equations which can make it easy for you to implement the paging concept. If you are expecting to find explanations of paging in specific applications, like operating system memory management or file systems, then you are reading the wrong article.

The best way to explain paging is to apply it to an example. Let's assume that we have a collection of 10 items and we want to divide these 10 items into groups where each group contains 3 items.

If we apply the calculations manually, we will have the items distributed as in the image below.


This was easy because the item count is not that big, but this is not always the case. Also, we need to come up with the mathematical operation or equation which can be used to carry out the paging task automatically or through code.

After some analysis you will find that the mathematical relation will end up as in the image below.


The equation states that when we divide the "Item Index" by the "Page Size", we get the "Page Index", and the remainder is the "Item Index Per Page". Let's apply this mathematical equation to the example we have on hand right now.


When we divided "Item Index = 2" (third item) by "Page Size = 3" we got "Page Index = 0" and "Item Index Per Page = 2". This means that the third item is the third item on the first page.

Also, when we divided "Item Index = 3" (fourth item) by "Page Size = 3" we got "Page Index = 1" and "Item Index Per Page = 0". This means that the fourth item is the first item on the second page.

Also, when we divided "Item Index = 7" (eighth item) by "Page Size = 3" we got "Page Index = 2" and "Item Index Per Page = 1". This means that the eighth item is the second item on the third page.

Also, when we divided "Item Index = 9" (tenth item) by "Page Size = 3" we got "Page Index = 3" and "Item Index Per Page = 0". This means that the tenth item is the first item on the fourth page.
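In code, this boils down to integer division and the remainder (modulo) operator. The small T-SQL sketch below, with hypothetical variable names, reproduces the third of the cases above.

DECLARE @PageSize INT = 3
DECLARE @ItemIndex INT = 7 -- the eighth item (zero-based index)

DECLARE @PageIndex INT = @ItemIndex / @PageSize -- integer division gives 2, i.e. the third page
DECLARE @ItemIndexPerPage INT = @ItemIndex % @PageSize -- remainder gives 1, i.e. the second item on that page

SELECT @PageIndex AS PageIndex, @ItemIndexPerPage AS ItemIndexPerPage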


So, we can transform the equation into the shape below:

Item Index = (Page Index * Page Size) + Item Index Per Page


This means that if we have a value for "Page Index" and a value for "Page Size" and we need to know the index of the first item and the last item on this page we can use the equation above as follows.

First Item Index = (Page Index * Page Size) + Min Item Index Per Page
                             = (Page Index * Page Size) + 0
                             = (Page Index * Page Size)

Last Item Index = (Page Index * Page Size) + Max Item Index Per Page
                            = (Page Index * Page Size) + (Page Size - 1)

But note that if the calculated "Last Item Index" is greater than the index of the last item in the whole collection, then take the smaller number which is the index of the last item in the whole collection.


To verify the equations above, let's apply them to the example we have on hand.

On the first page, (first item index = 0 * 3 = 0) and (last item index = (0 * 3) + (3 - 1) = 2)
On the second page, (first item index = 1 * 3 = 3) and (last item index = (1 * 3) + (3 - 1) = 5)
On the third page, (first item index = 2 * 3 = 6) and (last item index = (2 * 3) + (3 - 1) = 8)
On the fourth page, (first item index = 3 * 3 = 9) and (last item index = (3 * 3) + (3 - 1) = 11 which is greater than the max available item index (9), therefore, last item index = 9)
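These first/last item index equations translate directly into code as well. The sketch below, again with hypothetical variable names, computes the boundaries of the fourth page from the example and clamps the last index to the end of the collection as noted above.

DECLARE @PageSize INT = 3
DECLARE @PageIndex INT = 3 -- the fourth page (zero-based)
DECLARE @ItemsCount INT = 10 -- total number of items in the collection

DECLARE @FirstItemIndex INT = @PageIndex * @PageSize -- 9
DECLARE @LastItemIndex INT = (@PageIndex * @PageSize) + (@PageSize - 1) -- 11

-- Clamp to the last available item index when the page is not full
IF (@LastItemIndex > @ItemsCount - 1)
BEGIN
 SET @LastItemIndex = @ItemsCount - 1 -- 9
END

SELECT @FirstItemIndex AS FirstItemIndex, @LastItemIndex AS LastItemIndex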


That's it, as you can see these equations can make your life much easier.
Goodbye.


2014-01-13

For SQL Hierarchical Data, How To Show Only Tree Branches Including Certain Type Of Entities And Discard Others


SQL Hierarchical Data

If you have a tree of entities represented in a SQL database as a parent-child relation, as in the image above, you may need to view only the tree branches which include a certain type of entities and discard the branches which don't include this type.

For example, let's have a look at the tree above and assume that we need to show only the branches which include the "J" type of entities. For this to happen, we need to transform the above tree into the one below.

SQL Hierarchical Data

How to do this in SQL?
To be able to answer this question, we first need to know how our mind does it. Our mind does a lot of processing and computing in such an efficient way that it makes us think it is easy, but is it that easy?

Assume that we have these types of entities in our tree.


And the real entities instances in our database are as in the image below.


Which means that our entities are in the tree form as in the image below.


Now, let's assume that we need to view only the tree branches including the "J" type. To do this manually, with just a piece of paper and a pen, we look at the leaf entities in the tree. By leaf I mean the entities which have no children. Then, up from there, we trace each branch and if a branch doesn't include a "J", we erase it, until we finally get the final tree.

To do this in a more systematic way, we can first locate our leaf entities. Then, we go up one level for each one of the leaf entities, then another level up, and so on until we reach the top of the tree. This could be illustrated as in the table below.


As you can see, we started with the leaf entities and it took us five iterations to reach the top of the tree. But why do we need to do that?

We did that because we already know the sequence of the system entity types; for example, we know that if the leaf entity of a branch is "T", then this branch will not have a "J", as "J" comes after "T". So, we needed to trace back each branch to decide which branches to keep and which to discard.

Does that mean that every branch which doesn't end with "J" should be completely deleted up to the top of the tree? No. As you can see in our tree above, there is a branch which ends with "T4"; we are sure that this branch will not include a "J", but if we delete this branch up to the "C" entity, we will lose "P2" which has valid "J" children.

So, to get it right, you can think of it as a voting system. Each entity in the tree should vote on whether its parent should be kept or not. Finally, we sum the votes on each parent and know if at least one child needs this parent; if not, we can discard this parent without any problems or losing any valid branches.

So, each one of the leaf entities should already know whether its parent should be kept or not. In our example, if the leaf entity is a "J", then its parent should be voted a "1", but if the leaf entity is an "S", "T", "P" or "C", then the parent should be voted a "0".

So, this leads us to the result in the image below.


Now, we need to sum the votes of each entity to know which entities to keep and which to discard. After doing the summation, we will get the result as in the image below.


So, to make sure the logic we applied here is valid, let's return to the tree diagram and highlight the entities which got a sum of "0" votes. These are the entities which all of their child entities have judged to be useless and disposable.

SQL Hierarchical Data

Are these the entities we can delete from the tree to make sure every branch includes a "J" entity? Yes, those are the ones, and by deleting them we get the final tree as in the image below.


So, now we are sure that the logic we applied is valid but can we apply this complex logic in SQL?

Applying the same logic in SQL

--Creating table structure
DECLARE @AllEntities AS Table
(
 Id INT
 , NAME NVARCHAR(100)
 , ParentId INT
 , [Type] NVARCHAR(10)
)


--Inserting sample data as in the example supplied
INSERT INTO @AllEntities
(
 Id
 , NAME
 , ParentId
 , [Type]
)
VALUES
(1, 'C', NULL, 'C')
, (2, 'P1', 1, 'P')
, (3, 'P2', 1, 'P')
, (4, 'T1', 2, 'T')
, (5, 'T2', 2, 'T')
, (6, 'T3', 2, 'T')
, (7, 'T4', 3, 'T')
, (8, 'T5', 3, 'T')
, (9, 'S1', 4, 'S')
, (10, 'S2', 4, 'S')
, (11, 'S3', 5, 'S')
, (12, 'S4', 6, 'S')
, (13, 'S5', 6, 'S')
, (14, 'S6', 8, 'S')
, (15, 'S7', 8, 'S')
, (16, 'J1', 10, 'J')
, (17, 'J2', 10, 'J')
, (18, 'J3', 11, 'J')
, (19, 'J4', 11, 'J')
, (20, 'J5', 14, 'J')
, (21, 'J6', 14, 'J')


--Declaring a variable to hold the required ensured entity type
--Each branch in the tree should include this entity type, otherwise,
--the whole branch will be excluded from the final result
DECLARE @EnsuredEntityType AS NVARCHAR(10)
SET @EnsuredEntityType = 'J'


;WITH EnsuredEntityTree(Id, ParentId, Voting) AS
(
 --Starting with the leaf entities on the tree as the seed
 SELECT parent.Id
 , parent.ParentId
 , CASE
 WHEN 
 (
  (@EnsuredEntityType = 'J' AND (parent.Type = 'J'))
  OR
  (@EnsuredEntityType = 'S' AND (parent.Type = 'S' OR parent.Type = 'J'))
  OR
  (@EnsuredEntityType = 'T' AND (parent.Type = 'T' OR parent.Type = 'S' OR parent.Type = 'J'))
  OR
  (@EnsuredEntityType = 'P' AND (parent.Type = 'P' OR parent.Type = 'T' OR parent.Type = 'S' OR parent.Type = 'J'))
  OR
  (@EnsuredEntityType = 'C' AND (parent.Type = 'C'))
 ) THEN 1
 ELSE 0 END AS Voting
 FROM @AllEntities as parent
 LEFT OUTER JOIN @AllEntities as children
 ON children.ParentId = parent.Id
 WHERE children.Id IS NULL

 UNION ALL

 SELECT parent.Id
 , parent.ParentId
 , et.Voting --the same voting from the original seed object
 FROM @AllEntities AS parent
 INNER JOIN EnsuredEntityTree AS et
 ON et.ParentId = parent.Id
)

SELECT DISTINCT Id, ParentId
FROM EnsuredEntityTree
GROUP BY Id, ParentId
HAVING SUM(Voting) > 0


Finally, I hope you find this useful someday.
Good luck.



2013-12-27

ASP.NET Viewstate And Controlstate Performance Enhancements - Saving Viewstate And Controlstate On Server

ASP.NET Viewstate & Controlstate

Viewstate and Controlstate are used in ASP.NET pages to keep states of pages and all controls on them between postbacks. ASP.NET framework saves these states in a container to be used throughout postbacks to maintain these states.

Also, you can explicitly add any info you need to keep between postbacks to the page viewstate. This may seem tempting, but you must know that it doesn't come free of charge. Yes, viewstate and controlstate are very useful and powerful, but they have to be used wisely, otherwise you will badly affect your system performance.

The viewstate and controlstate are both saved by the server and then retrieved to keep your page and controls state. The default behavior is that these states are saved into a hidden field on the page, so that at postbacks the server will be able to read these states back from the hidden field and retrieve the states prior to the postback. Is this good?

This article is not meant to be a full guide about viewstate and controlstate; if you need a full guide you can have a look at the references at the end of this article. So, what is this article about?

This article will focus on how to overcome the drawbacks of heavy viewstate and controlstate and enhance the system performance by saving these states on server rather than sending them back and forth between client and server throughout postbacks.

Analysis
To know how viewstate and controlstate work, you need to know some points in brief:
  1. HTTP is stateless, which means that it doesn't by itself support saving the states of requests and responses. That is why each web development platform has to handle state in its own way when needed
  2. For ASP.NET, when a request is initiated, the server processes the request, builds the whole page and sends it back to the client. At this moment, the server forgets about the whole page object and all info related to the request. This is what is meant by stateless
  3. ASP.NET has its own way of saving the page states. It exposes some methods/events by which you can control how the page state is saved and then retrieved, but if you don't override these methods and provide your own implementation, there is a default behavior which ASP.NET will use to save the states
  4. The default behavior for ASP.NET is to save/load the page states into/from a hidden field on the page
  5. The states the server saves are the states of all the page controls before the page is sent back to the client. At the successive requests, the server can then retrieve these states to know how the page looked before the system user applied some changes on it at the client side
But why the hassle?
I used to ask myself why, at every request, the server needs to know the states the page was in before the last response. Does it really matter? As far as I know, when a request is performed, the form is submitted to the server and the server has all the info required to re-create and re-populate the form fields in the response, so for God's sake, why?

A common misunderstanding about viewstate
Some developers think that viewstate is used to keep the values and states of page controls so that the server is able to re-populate these values and states after postbacks. This is wrong. Believe me, even if you disable the viewstate on a page and its controls, the values you entered into the controls will still exist after postbacks. You don't believe me? Try it yourself.

Create a web application, add a page and disable viewstate on it, add a server textbox control and make it run at server, then add a server button control and make it run at server. Now start the application, enter some text inside the textbox and click the button. A postback will be performed and the textbox will be populated with the text you entered before the postback. How? This happened because when you clicked the button, the whole form was submitted, including the text you entered inside the textbox. So, the server didn't need anything extra to know the value you entered as it was already sent to it with the request. That's why I told you that viewstate is not responsible for keeping control values and states.

Believe it or not, one of the main purposes of the viewstate is to track the changes made to a page's controls. Why keep track of changes? To be able to properly fire events like "ontextchanged", which depend on tracking the changes made to a control, so that your custom handling code runs at the right time. Still not convinced? Then try the following example.

The proof
Try this:
  1. Create a web application
  2. Add a page and enable viewstate on it
  3. Add the following markup inside the form tag
    <asp:TextBox ID="box" runat="server" Text="" EnableViewState="true" ontextchanged="box_TextChanged"></asp:TextBox>
    <asp:Button ID="btn" runat="server" Text="Do Postback" onclick="btn_Click" />
    
  4. Write this code on the code behind inside the page class
    protected void btn_Click(object sender, EventArgs e)
    {
    }
    
    protected void box_TextChanged(object sender, EventArgs e)
    {
    }
    
  5. Put a breakpoint on the "box_TextChanged" event
  6. Run the application in debug mode
  7. Write "Test Test" in the textbox
  8. Click "Do Postback" button
  9. You will reach the breakpoint, hit F5 to return to client-side
  10. Delete "Test Test" from the textbox and leave it empty
  11. Click "Do Postback" button
  12. You will reach the breakpoint, hit F5 to return to client-side
  13. Stop debugging and disable the viewstate on the page
  14. Disable the viewstate on the textbox so that the markup will be as follows
    <asp:TextBox ID="box" runat="server" Text="" EnableViewState="false" ontextchanged="box_TextChanged"></asp:TextBox>
    <asp:Button ID="btn" runat="server" Text="Do Postback" onclick="btn_Click" />
    
  15. Put a breakpoint on the "box_TextChanged" event
  16. Run the application in debug mode
  17. Write "Test Test" in the textbox
  18. Click "Do Postback" button
  19. You will reach the breakpoint, hit F5 to return to client-side
  20. Notice that the textbox text is "Test Test", even without viewstate!!!
  21. Delete "Test Test" from the textbox and leave it empty
  22. Click "Do Postback" button
  23. You will not reach the breakpoint, even when the text is changed from "Test Test" to ""!!!
Confused? You have the right to be. Here is what happened:
  1. When you created the textbox using the markup, the default value of the textbox is empty or ""
  2. At the first load of the page, the server loaded the textbox with its default value which was set into the markup which was "" in our case
  3. Since this was the first page load, the server already knew that whether viewstate was enabled or disabled would not matter, as the page was in its default state
  4. At client-side, when you entered "Test Test" inside the textbox and then performed a postback, the server created the whole page and its controls from scratch
  5. So, the first step was to create the textbox and pre-populate it with its default value which is "" in our case
  6. At this point the viewstate may have played a role, so:
    1. When viewstate was enabled:
      1. The server checked if any viewstate was saved from before
      2. In this case, no viewstate was saved because the previous load was the first page load as we stated above in step #3
      3. So, the textbox text was not changed and it stayed ""
    2. When viewstate was disabled:
      1. The textbox text was not changed and it stayed ""
  7. Server loaded the new textbox text from the submitted form, so in our case it was found to be "Test Test"
  8. Server set the textbox text to the value retrieved in the previous step which is "Test Test"
  9. Server compared the textbox value from step #6 and #8 and figured out that the value has changed from "" to "Test Test", so the server fired the "box_TextChanged" event
  10. Before rendering the page, the server had something to do:
    1. When viewstate was enabled:
      1. The server saved the viewstate of the page controls, so the textbox state was saved and the saved value of the textbox was "Test Test"
    2. When viewstate was disabled:
      1. No state was saved
  11. Back again at client side, when you cleared the textbox text and performed a postback, the server created the whole page and its controls from scratch
  12. So, the first step was to create the textbox and pre-populate it with its default value (from the markup) which is "" in our case
  13. At this point the viewstate may have played a role, so:
    1. When viewstate was enabled:
      1. The server checked if any viewstate was saved from before
      2. In this case, viewstate was found and the saved texbox value was "Test Test"
      3. So, the server re-populated the textbox with its previous value which was saved in the viewstate, in our case, "Test Test"
    2. When viewstate was disabled:
      1. The textbox text was not changed and it stayed ""
  14. Server loaded the new textbox text from the submitted form, so in our case it was found to be ""
  15. Server set the textbox text to the value retrieved in the previous step which is ""
  16. Server compared the textbox value from step #13 and #15 to check if any changes had been applied on the textbox, so:
    1. When viewstate was enabled:
      1. A change had been applied from "Test Test" to ""
      2. The server fired the "box_TextChanged" event
    2. When viewstate was disabled:
      1. No change had been applied as both values are ""
      2. The server didn't fire the "box_TextChanged" event
That's it, I think you now got it right, right?

The last thing to mention here is the ASP.NET page lifecycle, shown in the diagram below.

ASP.NET Page Lifecycle

Why save viewstate and controlstate on the server?
Now after we have understood what viewstate and controlstate are about, let's discuss something. We said before that ASP.NET has a default approach to save and load viewstate and controlstate if no other approach is set by the system developer. This default approach is saving and loading the states into and from a hidden field on the page. Is this good? Maybe it is good for some cases, but if your page controls are complex or numerous, or you are stuffing too many objects into the viewstate, the viewstate and controlstate become large, and in this case the hidden field takes up too much space on the page. This eventually causes the response size to be large. I think this is reason enough to look for another approach for saving and loading states.

How to save viewstate and controlstate on the server?
To control the way ASP.NET saves and loads your page states, you need to override two methods of the page class, but before going deep into the code let's highlight some points first.

The whole idea here is to save the states in a text file on the server. This way the server will not have to send the states back and forth between server and client, which makes the request and response sizes smaller and the whole application performance better.

So, for this approach to work as it should, a state file should be created for every user so that users will not share states. This could be handled by using session ids as the file names or something like that. This works because session ids are unique across users and it is impossible for two users to have the same session id.

Problem
This is good, but there is a problem with this approach. We said that session ids are unique for all users and that every user will have his own unique session id; but for the same user, if he opens more than one page of the application in more than one tab, all these pages and tabs will share the same session id. So, we have to differentiate between the states of pages even for the same user, because we don't want to load the states of page A into page B.

Solution
We have to define an id for each page a user opens, so that the combination of this id with the user's session id forms a unique page id. This combined id will be used as the state file id. To do this, we will generate a unique page id at the page's first load and save this id in a hidden field on the page.

Building the whole solution
As we said before, there are two methods to override on the page class:
  1. The "SavePageStateToPersistenceMedium" method, which is called when the server saves the page states before the page is rendered
  2. The "LoadPageStateFromPersistenceMedium" method, which is called when the server loads the states saved during the previous response
So, we will create our own class derived from the "Page" class and customize it to look as in the code below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Web.UI;
using DevelopmentSimplyPut;
using System.IO;
using System.Web.UI.WebControls;

namespace DevelopmentSimplyPut.CustomStatePreservePages
{
    public class InFileStatePreservePage : Page
    {
        private string pageId;
        public string PageId
        {
            get
            {
                string result = "";

                if (!string.IsNullOrEmpty(pageId))
                {
                    result = pageId;
                }
                else
                {
                    result = Request.Form["hdnPageId"];
                }

                return result;
            }
        }

        public string StatePreserveFilesFolderPath
        {
            get
            {
                return Path.Combine(Request.PhysicalApplicationPath, Constants.StatePreserveFilesFolderName);
            }
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if(!IsPostBack)
            {
                pageId = Session.SessionID.ToString() + Guid.NewGuid().ToString();
                Page.ClientScript.RegisterHiddenField("hdnPageId", pageId);
            }
        }

        protected override object LoadPageStateFromPersistenceMedium()
        {
            if (Page.Session != null)
            {
                if (Page.IsPostBack)
                {
                    string filePath = Session[PageId].ToString();

                    if (!string.IsNullOrEmpty(filePath))
                    {
                        if (!File.Exists(filePath))
                        {
                            return null;
                        }
                        else
                        {
                            StreamReader sr = File.OpenText(filePath);
                            string viewStateString = sr.ReadToEnd();
                            sr.Close();

                            try
                            {
                                File.Delete(filePath);
                            }
                            catch
                            {

                            }

                            LosFormatter los = new LosFormatter();
                            return los.Deserialize(viewStateString);
                        }
                    }
                    else
                    {
                        return null;
                    }
                }
                else
                {
                    return null;
                }
            }
            else
            {
                return null;
            }
        }

        protected override void SavePageStateToPersistenceMedium(object state)
        {
            if (state != null)
            {
                if (Page.Session != null)
                {
                    if (!Directory.Exists(StatePreserveFilesFolderPath))
                    {
                        Directory.CreateDirectory(StatePreserveFilesFolderPath);
                    }

                    string fileName = Session.SessionID.ToString() + "-" + DateTime.Now.Ticks.ToString() + ".vs";
                    string filePath = Path.Combine(StatePreserveFilesFolderPath, fileName);

                    Session[PageId] = filePath;

                    LosFormatter los = new LosFormatter();
                    StringWriter sw = new StringWriter();
                    los.Serialize(sw, state);

                    StreamWriter w = File.CreateText(filePath);
                    w.Write(sw.ToString());
                    w.Close();
                    sw.Close();
                }
            }
        }
    }
}

And now any page you create in the system should inherit from the "InFileStatePreservePage" class and in the "Page_Load" event call the base first as in the code below.
public partial class MyPage : InFileStatePreservePage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        base.Page_Load(sender, e);
    }
}

Deleting the abandoned state files
To delete the remaining state files, you need to make sure that the files you are going to delete are only the outdated ones. To do that, you should delete only the files that have not been modified for a period greater than the session timeout. You can add the code below to your Global.asax file to do this.
void Application_Start(object sender, EventArgs e) 
{
 string stateFilesDirectory = System.IO.Path.Combine(Server.MapPath("~"), DevelopmentSimplyPut.Constants.StatePreserveFilesFolderName);
 Application["stateFilesDirectory"] = stateFilesDirectory;
 
 if (!string.IsNullOrEmpty(stateFilesDirectory) && System.IO.Directory.Exists(stateFilesDirectory))
 {
  string[] files = System.IO.Directory.GetFiles(stateFilesDirectory);
  foreach (string file in files)
  {
   System.IO.FileInfo fi = new System.IO.FileInfo(file);
   fi.Delete();
  }
 }
}

void Application_End(object sender, EventArgs e) 
{
 string stateFilesDirectory = Application["stateFilesDirectory"].ToString();

 if (!string.IsNullOrEmpty(stateFilesDirectory) && System.IO.Directory.Exists(stateFilesDirectory))
 {
  string[] files = System.IO.Directory.GetFiles(stateFilesDirectory);
  foreach (string file in files)
  {
   System.IO.FileInfo fi = new System.IO.FileInfo(file);
   fi.Delete();
  }
 }
}

void Session_Start(object sender, EventArgs e) 
{
 string stateFilesDirectory = Application["stateFilesDirectory"].ToString();

 if (!string.IsNullOrEmpty(stateFilesDirectory) && System.IO.Directory.Exists(stateFilesDirectory))
 {
  string[] files = System.IO.Directory.GetFiles(stateFilesDirectory);
  int timeoutInMinutes = Session.Timeout;
  int bufferMinutes = 5;
  foreach (string file in files)
  {
   System.IO.FileInfo fi = new System.IO.FileInfo(file);
   if (fi.LastAccessTime < DateTime.Now.AddMinutes((-1 * (timeoutInMinutes + bufferMinutes))))
   {
    fi.Delete();
   }
  }
 }
}

void Session_End(object sender, EventArgs e)
{
 string stateFilesDirectory = Application["stateFilesDirectory"].ToString();
 
 if (!string.IsNullOrEmpty(stateFilesDirectory) && System.IO.Directory.Exists(stateFilesDirectory))
 {
  string[] files = System.IO.Directory.GetFiles(stateFilesDirectory);
  int timeoutInMinutes = Session.Timeout;
  int bufferMinutes = 5;
  foreach (string file in files)
  {
   System.IO.FileInfo fi = new System.IO.FileInfo(file);
   if (fi.LastAccessTime < DateTime.Now.AddMinutes((-1 * (timeoutInMinutes + bufferMinutes))))
   {
    fi.Delete();
   }
  }
 }
}

Important prerequisite
For this solution to work well you need to set your application session state mode to InProc. Otherwise, the "Session_End" event will not fire, and in that case the only time the abandoned state files get cleared is at the "Application_Start" and "Application_End" events, which is way too late and may cause the server to run low on disk space. So, set your application web.config file as below.
<configuration>
    <system.web>
     <sessionState cookieless="UseCookies" mode="InProc" timeout="60"/>
    </system.web>
</configuration>


That's it, hope you can find this helpful someday. For further reading you can check the resources below.
Good luck.


References
  1. TRULY Understanding ViewState - Infinities Loop 
  2. ViewState in SQL
  3. ViewState Compression - CodeProject
  4. Understanding ASP.NET View State
  5. Flesk.NET Components - Viewstate Optimizer
  6. ViewState: Various ways to reduce performance overhead - CodeProject
  7. Keep ASP.NET ViewState out of ASPX Page for Performance Improvement - CodeProject
  8. Control State vs. View State Example

2013-12-22

How To Avoid Problems Caused By Clients' Browser Cached Resource Files (JS, CSS, ...) With Every New Build


Javascript & Css

Browsers like IE, Firefox, Chrome and others have their own way of deciding whether a file should be cached or not. If a file link (URL) is requested more than a certain number of times, the browser decides to cache this file to avoid repeating the requests and their corresponding responses. So, after a file is cached by the browser and a new request is made for this file, the browser responds with the cached version of the file instead of retrieving the file from the server.

But how does the browser know if we are requesting the same file? The browser knows that by comparing the URL of the requested file to the URL of the file it had already cached before. This means that any slight change on the requested file URL will be recognized by the browser as a completely new file.
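For example, if a page that used to reference Scripts/app.js starts referencing Scripts/app.js?v=2 instead (the file name and version value here are just for illustration), the browser treats the second URL as a brand new resource and requests it from the server again, even though both URLs point to the same physical file.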

So, what is the problem?
The problem is that sometimes between builds there are changes applied to the application javascript and style files. Although the new code is deployed, clients get javascript errors and some of the styles look messy. Why? Because of the clients' browsers caching the javascript and style files: although we replaced the old files with the new ones, the client's browser is still using the old cached ones because the URLs are still the same.

What is the solution?
There are many approaches to fix this issue, but not all of them are proper ones. Let's check some of these solutions.

Some of the solutions are:
  1. Ask the client to ask all of his system users to clear the browser cache
  2. Ask the client to ask all of his system users to disable browser caching
  3. For each build rename JS and CSS file names
  4. For each build add a dummy query string to all resources URLs
Now, let's see. I think we will all agree that the first two options are not practical at all. The third option will work for sure, but it is not acceptable: renaming the files requires changing all references to them across all application pages and code, which is dangerous and terrible for code maintainability.

This leaves us with the fourth option, which may seem like the third one, but believe me, they are not the same. I certainly don't mean doing it manually, browsing through the whole code and changing the dummy query string on every resource URL; there is a more generic and respectable way to do it without even caring about re-visiting the URLs for each new build.

The solution is to implement a server control to be used to register the resources instead of using the regular script and link tags. This control will be responsible for generating the URLs with the dummy query strings and making sure these query strings are not changed unless a new build is deployed.

Now, let's see some code.

Server Control:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Globalization;
using System.Web.UI.WebControls;

namespace DevelopmentSimplyPut.CustomWebControls
{
    public class VersionedResourceRegisterer : WebControl, INamingContainer
    {
        JsTags js;
        [PersistenceMode(PersistenceMode.InnerProperty)]
        public JsTags JS
        {
            get
            {
                return js;
            }
        }

        CssTags css;
        [PersistenceMode(PersistenceMode.InnerProperty)]
        public CssTags CSS
        {
            get
            {
                return css;
            }
        }

        public VersionedResourceRegisterer()
        {
            js = new JsTags();
            css = new CssTags();
        }

        protected override void Render(System.Web.UI.HtmlTextWriter output)
        {
            string fullTag = "";
            string version = AppConstants.Version;

            if (null != JS && JS.Count > 0)
            {
                foreach (Tag jsTag in JS)
                {
                    string path = jsTag.path;
                    path = GetAbsolutePath(path);

                    if (!string.IsNullOrEmpty(path))
                    {
                        fullTag += string.Format(CultureInfo.InvariantCulture, "<script src=\"{0}?v={1}\" type=\"text/javascript\"></script>", path, version); 
                    }
                }
            }

            if (null != CSS && CSS.Count > 0)
            {
                foreach (Tag cssTag in CSS)
                {
                    string path = cssTag.path;
                    path = GetAbsolutePath(path);

                    if (!string.IsNullOrEmpty(path))
                    {
                        fullTag += string.Format(CultureInfo.InvariantCulture, "<link href=\"{0}?v={1}\" type=\"text/css\" rel=\"stylesheet\" />", path, version);
                    }
                }
            }

            output.Write(fullTag);
        }

        private string GetAbsolutePath(string path)
        {
            string result = path;

            if(!string.IsNullOrEmpty(path))
            {
                if (!path.Contains("://"))
                {
                    if (path.StartsWith("~"))
                    {
                        HttpRequest req = HttpContext.Current.Request;
                        string applicationPath = req.Url.Scheme + "://" + req.Url.Authority + req.ApplicationPath;

                        if(!applicationPath.EndsWith("/"))
                        {
                            applicationPath += "/";
                        }

                        path = path.Replace("~", "").Replace("//", "/");

                        if (path.StartsWith("/"))
                        {
                            if (path.Length > 1)
                            {
                                path = path.Substring(1, path.Length - 1);
                            }
                            else
                            {
                                path = "";
                            }
                        }

                        result = applicationPath + path;
                    }
                }
            }

            return result;
        }
    }

    public class Tag
    {
        public string path { set; get; }
    }

    public class JsTags : List<Tag>
    {
    }

    public class CssTags : List<Tag>
    {
    }
}

Version Generation:
public static class AppConstants
{
    private static string version;
    public static string Version
    {
        get
        {
            return version;
        }
    }

    static AppConstants()
    {
        // A new GUID per application start; the encoding is just a defensive step before the value is written into the markup.
        version = System.Web.HttpUtility.HtmlEncode(Guid.NewGuid().ToString());
    }
}
As you can see, the AppConstants class is a static class and inside its static constructor the version is generated only once. This means that with each application restart (IIS reset, app pool recycle or new deployment) a new version is generated, so with each build we get a new version.
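By the way, if you prefer the version to change only when new binaries are actually deployed (instead of on every application restart), one alternative is to derive it from the build output itself. The constructor below is just a sketch of that idea, not part of the original solution; it assumes the assembly is deployed as a file on disk whose last-write time changes with each build.
    static AppConstants()
    {
        // Alternative (sketch): base the version on the assembly's last-write time
        // so it only changes when a new build is deployed, not on every restart.
        string assemblyPath = System.Reflection.Assembly.GetExecutingAssembly().Location;
        version = System.IO.File.GetLastWriteTimeUtc(assemblyPath).Ticks.ToString();
    }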

Using Control On Pages:
<ucVersionedResourceRegisterer:VersionedResourceRegisterer runat="server">
 <JS>
  <ucVersionedResourceRegisterer:Tag path="Scripts/jquery-1.10.2.min.js" />
  <ucVersionedResourceRegisterer:Tag path="Scripts/jquery-migrate-1.2.1.min.js" />
  <ucVersionedResourceRegisterer:Tag path="Scripts/jquery.alerts.min.js" />
 </JS>
 <CSS>
  <ucVersionedResourceRegisterer:Tag path="Styles/jquery.alerts.css" />
 </CSS>
</ucVersionedResourceRegisterer:VersionedResourceRegisterer>
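One note here: the "ucVersionedResourceRegisterer" tag prefix has to be registered before the control can be used on a page, either with a Register directive on each page or once in web.config. The assembly name used below ("DevelopmentSimplyPut") is only an assumption, so replace it with the actual assembly that contains the control.
<%@ Register TagPrefix="ucVersionedResourceRegisterer" Namespace="DevelopmentSimplyPut.CustomWebControls" Assembly="DevelopmentSimplyPut" %>
Or, to register it once for all pages:
<configuration>
    <system.web>
        <pages>
            <controls>
                <add tagPrefix="ucVersionedResourceRegisterer" namespace="DevelopmentSimplyPut.CustomWebControls" assembly="DevelopmentSimplyPut" />
            </controls>
        </pages>
    </system.web>
</configuration>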

Finally, this is not the only advantage of using the server control, as you can always use it to gain more control over your resource files. One of the tasks in which I made use of this control is applying automatic minification and bundling of my resource files to enhance my application's performance.
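For example, here is a rough sketch of one way to do the minification part (not the exact code I used): a small helper you could add to the VersionedResourceRegisterer class and call from Render on each registered path, assuming a pre-built .min copy of every script and style sheet is deployed alongside the original.
        private static string ToReleasePath(string path)
        {
            // Assumption: for every "foo.js" there is a pre-built "foo.min.js" deployed
            // next to it (and the same for .css files). Only swap when the application
            // is compiled with debug="false".
            if (!HttpContext.Current.IsDebuggingEnabled && !string.IsNullOrEmpty(path))
            {
                if (path.EndsWith(".js", StringComparison.OrdinalIgnoreCase)
                    && !path.EndsWith(".min.js", StringComparison.OrdinalIgnoreCase))
                {
                    return path.Substring(0, path.Length - ".js".Length) + ".min.js";
                }

                if (path.EndsWith(".css", StringComparison.OrdinalIgnoreCase)
                    && !path.EndsWith(".min.css", StringComparison.OrdinalIgnoreCase))
                {
                    return path.Substring(0, path.Length - ".css".Length) + ".min.css";
                }
            }

            return path;
        }
With a helper like this, Render would call GetAbsolutePath(ToReleasePath(path)) instead of GetAbsolutePath(path).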

That's it. Hope you find this post helpful someday.
Good luck.


2013-12-21

Application To Generate Combined Images Of All Image-Categories Possible Combinations

[Update] The application is now updated to avoid "out of memory" exceptions.


On my last project I was working on a tree control which represents system business objects in a hierarchical form. Each object has a workflow to go through, and with each step in this workflow the object's state changes.

So, one of the requirements was to add images beside each tree node for the system user to know the status of each tree node easily with the need to open a properties window or something like that.

This is easy but we have some points to clarify first:
  • Each system object has three categories of status
    1. Cat01: is very business-specific, so I will not go into its details, but let's just say that this category of object status has one of 25 possibilities
    2. Cat02: is related to the type of change applied to the system object (added, edited, deleted, unchanged)
    3. Cat03: is also business-specific, but it is mainly about the object's change-approval status. This category of object status has one of 7 possibilities
  • For each image tag added on the page there will be a request to the server and for sure its related response

So, after some thinking we came up with 2 approaches to choose from:
  1. Add three image tags beside each tree node and fill these tags with appropriate images according to the status of each object
  2. Add one image tag beside each tree node and fill this tag with only one image, which is the object's 3 status images combined into one. The specific image name is built by concatenating the keywords of each category's status. This way we don't have to switch-case or detect each combination from the real object status; we just concatenate the keywords and voila, we have the right image name

After doing the math, and it was so simple, we decided to go with the second approach to eliminate the extra requests that would be sent for each tree node to get all three status images; so instead of 3 requests per node it would be only 1.

So, now comes the hard part: we need to generate the combined images. We could have written some code to generate the combined image of each tree node on the fly at run-time, but we thought this would not be the best decision, as the status images are static images which we already have at design time. So, there was no reason to go with that much run-time processing for each node when we could generate all the possible image combinations we need up front.

That's why we decided to generate all the possible combinations that we could have for each system object status. Someone said this would be too many images, but we replied that it is not a problem, as these images are static and at the end of the day we are talking about a few megabytes of static hard disk space, not RAM.

We thought that doing the combination thing manually would be a shame, and actually impossible, as we have 25 x 4 x 7 = 700 combinations, which means 700 images. Doing all of this work manually is bad by all means, especially when at some point the client decides to replace an image with another one or add a new one.

That's why I wrote a simple windows application which does all the hard work. You just give it each category's images and in just seconds you get all your images.

Hint: the code related to generating all the possible combinations is built using the "Possibilities Cube" library I had posted before. If you are interested in reading about it, you can find it at Possibilities Cube Library - A Library Smart Enough To Calculate All Possibilities With Logical Conditions
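For anyone curious about the core idea without downloading the application, here is a minimal sketch (not the actual application code) of generating the 700 combinations with plain nested loops and System.Drawing. It assumes each category is simply a list of image file paths and the combined image is the three status images drawn side by side; all names here are illustrative.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

public static class CombinedStatusImageGenerator
{
    // cat01Files, cat02Files and cat03Files hold the image file paths of the three
    // categories; outputFolder is where the 25 x 4 x 7 = 700 combined images end up.
    public static void GenerateAll(string[] cat01Files, string[] cat02Files, string[] cat03Files, string outputFolder)
    {
        foreach (string f1 in cat01Files)
        {
            foreach (string f2 in cat02Files)
            {
                foreach (string f3 in cat03Files)
                {
                    using (Image img1 = Image.FromFile(f1))
                    using (Image img2 = Image.FromFile(f2))
                    using (Image img3 = Image.FromFile(f3))
                    using (Bitmap combined = new Bitmap(
                        img1.Width + img2.Width + img3.Width,
                        Math.Max(img1.Height, Math.Max(img2.Height, img3.Height))))
                    using (Graphics g = Graphics.FromImage(combined))
                    {
                        // Draw the three status images side by side.
                        g.DrawImage(img1, 0, 0);
                        g.DrawImage(img2, img1.Width, 0);
                        g.DrawImage(img3, img1.Width + img2.Width, 0);

                        // The combined file name is just the three keywords concatenated,
                        // so the UI can build it directly from the object's status.
                        string name = Path.GetFileNameWithoutExtension(f1) + "-"
                                    + Path.GetFileNameWithoutExtension(f2) + "-"
                                    + Path.GetFileNameWithoutExtension(f3) + ".png";

                        combined.Save(Path.Combine(outputFolder, name), ImageFormat.Png);
                    }
                }
            }
        }
    }
}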

Now, let's see some screenshots for the windows application.


Images for the first category (Cat01)

Images for the second category (Cat02)

Images for the third category (Cat03)

Adding images for each category on the application

Finally, the generated images

As you can see, the application is easy to use and you can apply any modifications to the code to suit your specific business needs. Currently the code is set to export images in PNG format, but you can change that as you wish.

I didn't put much care into the application UI, graphics and so on, as it is only for in-house usage to serve a certain need, not to be a standalone product or anything like that. So don't be turned off by the UI; at the end of the day it may save you a lot of time and effort.

Finally, you can download the application from here


Hope you find this useful.
Good luck.