Pipes and Filters Design Pattern: A Practical Approach

When it comes to data processing, one of the common challenges developers face is how to write maintainable and reusable code. The pipes and filters design pattern is a powerful tool that can help you solve this problem by breaking down a complex data processing task into a series of simple, reusable filters that can be combined in different ways to achieve different results.

In this blog post, we will take a look at how to use the pipes and filters pattern to validate incoming data using C#. We will start by defining the basic building blocks of the pattern, then we will implement a simple example that demonstrates how the pattern works in practice.

Building Blocks

The pipes and filters pattern consists of three main components: filters, pipes, and the pipeline.

  • A filter is a simple piece of code that performs a specific task on the input data and returns the result. In our example, we will have two filters: a validation filter that checks if the input data is valid and a transformation filter that converts the input data to uppercase.
  • A pipe is a data structure that connects the output of one filter to the input of another. In our example, we will not use pipes explicitly, but the pipeline will be responsible for connecting the filters together.
  • The pipeline is the main component of the pattern that holds all the filters and connects them together. It is responsible for applying the filters to the input data in the correct order and returning the final result.

Implementing the Pipes and Filters Pattern in C#

Now that we have a basic understanding of the components of the pipes and filters pattern, let’s take a look at how we can implement it in C#.

First, we will define an interface for filters called IPipeFilter<T> that has a single method called Process, which takes an input of type T and returns an output of type T.

interface IPipeFilter<T>
{
    T Process(T input);
}

Next, we will create two filters that implement this interface. The first one is DataValidationFilter, which checks if the input data is valid and throws an exception if it is not.

class DataValidationFilter : IPipeFilter<string>
{
    public string Process(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            throw new Exception("Invalid input data");

        return input;
    }
}

The second filter is DataTransformationFilter, which converts the input data to uppercase.

class DataTransformationFilter : IPipeFilter<string>
{
    public string Process(string input)
    {
        return input.ToUpper();
    }
}

Finally, we will create a class called DataProcessingPipeline that takes a list of IPipeFilter<T> as a constructor argument, and it applies each filter in the list to the input data in the order they are provided.

class DataProcessingPipeline<T>
{
    private readonly List<IPipeFilter<T>> _filters;

    public DataProcessingPipeline(List<IPipeFilter<T>> filters)
    {
        _filters = filters;
    }

    public T Process(T input)
    {
        foreach (var filter in _filters)
        {
            input = filter.Process(input);
        }
        return input;
    }
}

With the above classes in place, we are ready to build the pipeline and use it to validate and transform incoming data.

class Program
{
    static void Main(string[] args)
    {
        var pipeline = new DataProcessingPipeline<string>(new List<IPipeFilter<string>>
        {
            new DataValidationFilter(),
            new DataTransformationFilter()
        });

        try
        {
            var processedData = pipeline.Process("valid input data");
            Console.WriteLine(processedData);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

In this example, we first create an instance of DataProcessingPipeline<string> with a list of filters that contains DataValidationFilter and DataTransformationFilter. Then we apply the pipeline to the input data “valid input data”, and the output of the pipeline will be “VALID INPUT DATA”.
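Because each filter is independent, extending the pipeline is just a matter of adding another filter to the list. For example, a hypothetical trimming filter (not part of the original example) could be placed in front of the validation step:

class DataTrimmingFilter : IPipeFilter<string>
{
    // Removes leading and trailing whitespace before validation runs.
    public string Process(string input)
    {
        return input?.Trim();
    }
}

It can then be registered as the first element of the filter list passed to DataProcessingPipeline<string>, and the rest of the pipeline stays untouched.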

Conclusion

The pipes and filters pattern is a powerful tool for breaking down complex data processing tasks into simple, reusable components. It can help you write maintainable and reusable code that is easy to understand and modify. In this blog post, we have seen how to use the pipes and filters pattern to validate incoming data using C#, but this pattern can be used in many other scenarios as well. I hope this example will give you a good starting point for using this pattern in your own projects.

Dynamic objects in C#

Dynamic objects in C# are objects that can have properties and methods added to them at runtime, as opposed to traditional objects which have a fixed set of properties and methods defined at compile time. There are a few different ways to create dynamic objects in C#, including the dynamic keyword and the ExpandoObject class.

Here’s an example of using the dynamic keyword to create a dynamic object:

dynamic obj = new ExpandoObject();
obj.Name = "John Smith";
obj.Age = 30;

console.WriteLine($"Name: {obj.Name}, Age: {obj.Age}");

In the example above, we create a dynamic object using the ExpandoObject class, which is part of the System.Dynamic namespace. We then add a Name and Age property to the object, and print them out.

Here’s an example of using the DynamicObject class to create a more powerful and flexible dynamic object:

dynamic obj = new MyDynamicObject();
obj.Name = "John Smith";
obj.Age = 30;

console.WriteLine($"Name: {obj.Name}, Age: {obj.Age}");

public class MyDynamicObject : DynamicObject
{
    private Dictionary<string, object> _properties = new Dictionary<string, object>();

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        _properties[binder.Name] = value;
        return true;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        return _properties.TryGetValue(binder.Name, out result);
    }
}

In the example above, we define a MyDynamicObject class that inherits from DynamicObject, and overrides the TrySetMember and TryGetMember methods. These methods allow us to add and retrieve properties from the object at runtime.

Dynamic objects can be useful in a variety of situations, such as when you need to work with data that has a flexible schema, or when you want to add properties and methods to an object at runtime based on user input. However, it’s important to keep in mind that using dynamic objects can make your code less predictable and more difficult to debug, so it’s usually best to use them sparingly. The ExpandoObject is a more powerful and flexible option than the dynamic keyword, but it may be more complex to use in some cases.
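One detail worth knowing is that ExpandoObject also implements IDictionary<string, object>, which makes it convenient for flexible-schema data. A minimal sketch (the property names are just illustrative):

dynamic person = new ExpandoObject();

// Populate members through the dictionary view, e.g. from parsed key/value data.
var members = (IDictionary<string, object>)person;
members["Name"] = "John Smith";
members["Age"] = 30;

// The same members are then available through the dynamic view.
Console.WriteLine($"Name: {person.Name}, Age: {person.Age}");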

dynamic vs ExpandoObject

  • dynamic is a type, whereas ExpandoObject is a class: The dynamic keyword is a type that can be used to declare a variable. When you declare a variable as dynamic, you can assign any value to it at runtime. The ExpandoObject, on the other hand, is a class that you can use to create a dynamic object.
  • dynamic member access is resolved at runtime, whereas ExpandoObject can also be used through a statically typed interface: When you use the dynamic keyword, the compiler does not check member access at compile time. Instead, it is resolved at runtime, which can make it more difficult to catch errors. The ExpandoObject, on the other hand, also implements IDictionary<string, object>, so when you access it through that interface the compiler can catch some errors that might occur when using the object.
  • dynamic by itself has limited functionality, whereas ExpandoObject objects have more functionality: The dynamic keyword only changes how member access is dispatched; on its own it does not let you add members to an arbitrary object at runtime. The ExpandoObject, on the other hand, is part of the System.Dynamic namespace, which provides a number of classes and methods for working with dynamic objects, so you can use it (or your own DynamicObject subclass) to create more powerful and flexible dynamic objects.

Conclusion

In general, the ExpandoObject is a more powerful and flexible option than the dynamic keyword alone, since it provides more functionality and can also be accessed through a statically typed dictionary interface. However, the dynamic keyword can be a simpler and more lightweight option in some cases, particularly when you don’t need the additional functionality provided by the ExpandoObject.

Asynchronous programming in C#

Asynchronous programming is a powerful tool for building responsive, scalable applications in C#. By using asynchronous techniques, you can avoid blocking the main thread of your application and ensure that your code runs efficiently and smoothly, even in the face of long-running or resource-intensive tasks.

In this blog post, we’ll take a closer look at asynchronous programming in C# and see how you can use it to build high-performance applications.

Using `async` and `await`

One of the most commonly used features of asynchronous programming in C# is the async and await keywords. These keywords allow you to write asynchronous code in a way that looks and feels like synchronous code, making it easier to read and understand.

Here’s an example of how you might use the async and await keywords to asynchronously retrieve data from a web service:

private async Task<string> GetDataAsync()
{
    using (var client = new HttpClient())
    {
        var response = await client.GetAsync("https://example.com/data");
        return await response.Content.ReadAsStringAsync();
    }
}

In this example, the GetDataAsync method is marked with the async keyword, which indicates that it contains asynchronous code. The await keyword is then used to asynchronously wait for the response from the web service, and the result is returned as a string.

Task-Based Asynchronous Pattern

In addition to the async and await keywords, C# also provides the Task-Based Asynchronous Pattern (TAP) for building asynchronous applications. TAP is a design pattern that uses tasks to represent the asynchronous operation, allowing you to write asynchronous code in a more flexible and powerful way.

Here’s an example of how you might use TAP to asynchronously retrieve data from a web service:

private Task<string> GetDataAsync()
{
    return Task.Factory.StartNew(() =>
    {
        using (var client = new HttpClient())
        {
            var response = client.GetAsync("https://example.com/data").Result;
            return response.Content.ReadAsStringAsync().Result;
        }
    });
}

In this example, the GetDataAsync method returns a Task<string> that represents the asynchronous operation. The task is created using the Task.Factory.StartNew method, which runs the specified delegate asynchronously on a separate thread. The delegate then retrieves the data from the web service and returns the result as a string.
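One thing to be aware of is that the .Result calls inside StartNew block the background thread until the HTTP work finishes. Since HttpClient already exposes task-returning methods, you can compose those tasks directly instead; a minimal sketch using ContinueWith and Unwrap:

private Task<string> GetDataAsync()
{
    var client = new HttpClient();

    // GetAsync already returns a Task, so no extra thread is needed. ContinueWith
    // chains the read of the response body once the response arrives, and Unwrap
    // flattens the resulting Task<Task<string>> into a Task<string>.
    return client.GetAsync("https://example.com/data")
        .ContinueWith(responseTask => responseTask.Result.Content.ReadAsStringAsync())
        .Unwrap();
}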

In conclusion, asynchronous programming is a powerful tool for building responsive, scalable applications in C#. By using techniques like the async and await keywords or the Task-Based Asynchronous Pattern (TAP), you can avoid blocking the main thread of your application and ensure that your code runs efficiently and smoothly, even in the face of long-running or resource-intensive tasks. Whether you’re building a simple console application or a complex web application, asynchronous programming can help you deliver high-performance, reliable code.

How to do polymorphic serialization/deserialization in C# System.Text.Json

Let’s say you have a hierarchy of models that you need to serialize and store as a JSON string. The straightforward way to do this would be to use the JsonSerializer class. But if one of the child properties is polymorphic, it becomes a little tricky. Let’s look at the following example.

public class ObjectMetadata
{
  public string ObjectName { get; set; }
  public List<BaseType> FieldMetadata { get; set; }
}

public class BaseType
{
  public string FieldName { get; set; }
  public string Description { get; set; }
}

public class StringField : BaseType
{
  public int Length { get; set; }
}

public class IntField : BaseType
{
  public int MinValue { get; set; }
  public int MaxValue { get; set; }
}

In this example we have a parent object ObjectMetadata, which has one property named FieldMetadata. This field metadata can be of any type depending on the scenario (e.g. int, decimal, date, currency, string, etc.), and we will have different implementations of BaseType to handle the different types.

Now let’s try to serialize this structure.

var objectMetadata = new ObjectMetadata()
{
	ObjectName = "Account",
	FieldMetadata = new List<BaseType>()
	{
		new StringField()
		{
			FieldName = "Name",
			Length = 90
		},
		new IntField()
		{
			FieldName = "Age",
			MinValue = 10,
			MaxValue = 100
		}
	}
};
var jsonString = JsonSerializer.Serialize(objectMetadata);
Console.WriteLine(jsonString);

This will print the following

{
    "ObjectName": "Account",
    "FieldMetadata": [{
            "FieldName": "Name",
            "Description": null
        }, {
            "FieldName": "Age",
            "Description": null
        }
    ]
}

Serialization

You will notice that it has ignored the properties of StringField and IntField. This is because the JsonSerializer only looks at the declared type of the property and tries to serialize the BaseType properties. If we want to make the serializer look at the derived type, we can do something like below.

var jsonString = JsonSerializer.Serialize((object[])objectMetadata
.FieldMetadata
.ToArray());
Console.WriteLine(jsonString);

The above code will internally call the GetType() method to figure out the exact type, and will serialize the child properties as well.

[{
        "Length": 90,
        "FieldName": "Name",
        "Description": null,
    }, {
        "MinValue": 10,
        "MaxValue": 100,
        "FieldName": "Age",
        "Description": null
    }
]

Now that we are able to serialize the model properly, if you try to deserialize the data you won’t be able to. This is because once we have converted the model to a JSON string we have lost all the type information, and when we try to deserialize the JSON string back to the model it will use the declared type BaseType and ignore the properties that belong to the derived types.

Deserialization

The only way we can deserialize the JSON string to the specific types polymorphically is to store some metadata in the JSON which we can use during deserialization.

public interface IFieldType
{
  string FieldType { get; }
}

Let’s use the above interface and implement it in our BaseType so that all derived classes of BaseType will have this property. Also, assign this property the name of the class, like below.

public class ObjectMetadata
{
	public string ObjectName { get; set; }
	public List<BaseType> FieldMetadata { get; set; }
}

public class BaseType : IFieldType
{
	public string FieldName { get; set; }
	public string Description { get; set; }
	public string FieldType => nameof(BaseType);
}

public class StringField : BaseType
{
	public int Length { get; set; }
	public new string FieldType => nameof(StringField);
}

public class IntField : BaseType
{
	public int MinValue { get; set; }
	public int MaxValue { get; set; }
	public new string FieldType => nameof(IntField);
}

This will make sure that when we serialize the model we will have the type information as part of the JSON string, like below.

{
    "ObjectName": "Account",
    "FieldMetadata": [{
            "Length": 90,
            "FieldType": "StringField",
            "FieldName": "Name",
            "Description": null
        }, {
            "MinValue": 10,
            "MaxValue": 100,
            "FieldType": "IntField",
            "FieldName": "Age",
            "Description": null
        }
    ]
}

Now we can use this property and write custom deserialization logic to determine the appropriate type. We will need to create a custom JsonConverter class to handle this.

public class FieldMetadataConverter<T> : JsonConverter<T> where T : IFieldType
{
	private readonly IEnumerable<Type> _types;

	public FieldMetadataConverter()
	{
		var type = typeof(T);
		_types = AppDomain.CurrentDomain.GetAssemblies()
		.SelectMany(s => s.GetTypes())
		.Where(p => type.IsAssignableFrom(p) && p.IsClass && !p.IsAbstract);
	}
	public override T Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
	{
		if (reader.TokenType != JsonTokenType.StartObject)
			throw new JsonException();

		using (var jsonDocument = JsonDocument.ParseValue(ref reader))
		{
			if (!jsonDocument.RootElement.TryGetProperty(nameof(IFieldType.FieldType), out var typeProperty))
				throw new JsonException();
			var type = _types.FirstOrDefault(x => x.Name == typeProperty.GetString());
			if (type == null)
				throw new JsonException();
			var jsonString = jsonDocument.RootElement.GetRawText();
			var jsonObject = (T)JsonSerializer.Deserialize(jsonString, type, options);
			return jsonObject;
		}
	}

	public override void Write(Utf8JsonWriter writer, T value, JsonSerializerOptions options)
	{
		JsonSerializer.Serialize(writer, (object)value, options);
	}
}

We have to override the Read() and Write() methods of JsonConverter. As we have seen, while writing we just have to cast the value to object, and while reading we check the FieldType property, look up the appropriate type and deserialize to that type.
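To wire this up, the converter has to be registered with the serializer options when deserializing. A minimal sketch, assuming the model classes from earlier:

var options = new JsonSerializerOptions();
options.Converters.Add(new FieldMetadataConverter<BaseType>());

// The converter kicks in for every BaseType element inside FieldMetadata,
// reads its FieldType property and deserializes to StringField or IntField.
var objectMetadata = JsonSerializer.Deserialize<ObjectMetadata>(jsonString, options);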

Hope you find this useful!

Reference

https://docs.microsoft.com/en-us/dotnet/standard/serialization/system-text-json-polymorphism

Build Blazor apps with Tailwind CSS

Blazor lets you build web UI components with C# and .NET. These components can run in the browser via WebAssembly or on the server via ASP.NET Core.

Tailwind CSS is a utility-first CSS framework. Unlike Bootstrap, Tailwind doesn’t come with its own components; it has thousands of small utility classes which can be used to build UI components based on your design needs.

Let’s see how to get started with Tailwind CSS in Blazor applications.

Create & Clean

First, create a Blazor application using the following command

dotnet new blazorwasm

After creating the Blazor application, remove the Bootstrap bits by following the steps below

  • In the source code under wwwroot/css/, delete the bootstrap and open-iconic folders
  • Open app.css and remove all CSS up to #blazor-error-ui from the beginning
  • Open index.html and remove the reference to Bootstrap: <link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" />

Setup

  • In the root folder of your Blazor application, run npm init --yes to initialize a package.json.
  • Next run npm install -D tailwindcss postcss-import.
  • Next run npx tailwindcss init --postcss, which will create the tailwind.config.js and postcss.config.js files

Once the files are created, update the tailwind.config.js file to watch certain files and ignore some folders.

module.exports = {
  content: [
    '!**/{bin,obj,node_modules}/**',
    '**/*.{razor,html}',
  ],
  theme: {
    extend: {},
  },
  plugins: [],
}

Update the postcss.config.js file as shown below. This is required to @import any CSS in {proj_name}.styles.css

module.exports = {
  plugins: {
    'postcss-import': {},
    tailwindcss: {}
  }
};

Create a new CSS file in the root called site.css and add the following code

@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";

Build & Watch

In the package.json, enter the following code for the scripts property

  "scripts": {
    "build": "npx tailwindcss --config tailwind.config.js --postcss postcss.config.js -i site.css -o ./wwwroot/site.min.css",
    "watch": "npx tailwindcss --config tailwind.config.js --postcss postcss.config.js -i site.css -o ./wwwroot/site.min.css --watch",
    "publish": "npx tailwindcss --config tailwind.config.js --postcss postcss.config.js -i site.css -o ./wwwroot/site.min.css --minify"
  }

Now if you run npm run build you should see site.min.css being created.

Now refer to the site.min.css in your index.html.
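For example, assuming the file is emitted to the root of wwwroot as configured in the scripts above, you would add <link href="site.min.css" rel="stylesheet" /> to the <head> section of wwwroot/index.html.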

Open site.css in your root folder and add the code below.

@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";

@layer base {
    .btn {
        @apply font-bold py-2 px-4 rounded;
    }
    .btn-blue {
        @apply bg-blue-500 text-white;
    }
    .btn-blue:hover {
        @apply bg-blue-700;
    }
}

Once the above is done open your Index.razor file and update the code as below.

@page "/"

<PageTitle>Index</PageTitle>

<button class="btn btn-blue">
  Button
</button>

Now run the npm run build command and then run dotnet watch, and you can see that the button is now styled using Tailwind CSS.

Now that you have done the setup, if you want to automate the build, check out this link.

Reference

https://www.tailblazor.dev/

How to use Redis as your primary database

Recently, I got a chance to work with Redis and realized that Redis is not just a caching solution; it can serve as your primary database. Traditional databases store their data on disk, even though most databases have an embedded cache in RAM to optimize query performance. Most of the time we end up using some caching solution, like an in-memory cache or Redis, to get sub-millisecond performance.

It’s easy to conceptualize your tables as Redis data structures. For example, a Hash can serve as your table and a Sorted Set can be used to build secondary indexes. Let’s see some of the basic database operations in the context of Redis for storing and querying a list of employees.

Inserting data

You can use hashes to store each record of your table. Each hash key will need to be suffixed with an identifier, like employees::1

HSET employees::1 name Arivu salary 100000 age 30
ZADD employees::name 0 Arivu:1
ZADD employees::salary 100000 1

HSET employees::2 name Uma salary 300000 age 31
ZADD employees::name 0 Uma:2
ZADD employees::salary 300000 2

HSET employees::3 name Jane salary 100000 age 25
ZADD employees::name 0 Jane:3
ZADD employees::salary 100000 3

HSET employees::4 name Zakir salary 150000 age 28
ZADD employees::name 0 Zakir:4
ZADD employees::salary 150000 4

The above commands will also work for updating the data. This basically creates four employee records while also updating the respective indexes. In the above example we are indexing only two fields. Unlike a traditional database, in Redis we have to take care of keeping the indexes up to date ourselves.

Querying data

If you want to query by the primary key

HGETALL employees::1

If you want to query by a secondary index, for example let’s query for salary >= 150000

ZRANGEBYSCORE employees::salary 150000 +inf
Output
======
1) "4"
2) "2"

Now you can do a HGETALL for all these ids.
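If you are doing this from C#, the same insert-and-query flow might look roughly like the sketch below with a client library such as StackExchange.Redis (an assumption; the post itself only uses raw commands):

using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Store the record as a hash and maintain the salary index ourselves.
db.HashSet("employees::1", new[]
{
    new HashEntry("name", "Arivu"),
    new HashEntry("salary", 100000),
    new HashEntry("age", 30)
});
db.SortedSetAdd("employees::salary", "1", 100000);

// Query the secondary index: ids of employees with salary >= 150000.
var ids = db.SortedSetRangeByScore("employees::salary", 150000, double.PositiveInfinity);
foreach (var id in ids)
{
    var employee = db.HashGetAll($"employees::{id}");
}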

If you want to run more advanced queries with AND/OR logic, I suggest you explore ZINTERSTORE/ZUNIONSTORE in Redis.

Sorting data

Once you know the relevant hashes that need to be returned to the client, you can use the SORT command to sort the employee data based on some field.

Suppose the ids returned after querying are the following

1) "1"
2) "2"
3) "3"
4) "4"

Store the results in a SET

SADD result 1 2 3 4

Sort the data

SORT result BY employees::*->name ALPHA GET employees::*->name
Output
1) "Arivu"
2) "Jane"
3) "Uma"
4) "Zakir"

Conclusion

As you can see, Redis is definitely capable of serving as a primary database, and with cloud providers like AWS offering managed Redis instances, it’s even easier to use Redis as your primary datastore.

Hope you find this useful.

Understanding SOLID Principles: Interface Segregation & Dependency Injection

Interface Segregation

The interface is one of the key concepts in object-oriented programming. Interfaces represent boundaries between what client code requires and how that requirement is implemented. The interface segregation principle states that interfaces should be small.

Whenever a class implements an interface, every property, event and method on that interface needs to be implemented in its entirety. So if we have large interfaces it does not make sense to expect clients to implement all members irrespective of what the client actually needs.

Consider you are designing an interface for the users of a streaming application,

public interface IStreamingUser {
   Stream Play(String videoId);
   Stream Download(String videoId);
}

There are different kinds of streaming users:

  • A RegularUser can only play videos
  • A PremiumUser can play as well as download videos

public class RegularUser : IStreamingUser {
  public Stream Play(String videoId) {
     // actual code to stream video
  }
  public Stream Download(String videoId) {
    throw new NotImplementedException();
  }
}

public class PremiumUser : IStreamingUser {
  public Stream Play(String videoId) {
     // actual code to stream video
  }
  public Stream Download(String videoId) {
    // actual code to download video
  }
}

As you can see from the above example, since the interface is not segregated properly we had to force the RegularUser class to implement the Download method.

Fixing using interface segregation

interface IPlayable {
 void Play(String id);
}

interface IDownloadable {
 void Download(String id);
}

class RegularUser : IPlayable {
  public void Play(String id) {
     //code
  }
}

class PremiumUser : IPlayable, IDownloadable {
  public void Play(String id) {
     //code
  }
  public void Download(String id) {
     //code
  }
}

Too often, interfaces are large facades behind which huge subsystems are hidden. At a certain critical mass, interfaces lose the adaptability that makes them so fundamental to developing solid code.

Dependency Injection

Dependency injection (DI) is a very simple concept with a similarly simple implementation. However, this simplicity belies the importance of the pattern. DI is the glue that ties all the SOLID principles together.

Let’s take a look at the controller class below

public class TaskListController 
{
    private readonly ITaskService taskService;
    private readonly IObjectMapper mapper;
    public TaskListController()
    {
         this.taskService = new TaskService();
         this.mapper = new AutoMapper();
    }

}

The problems with the above code

  • It is not unit testable
  • It has hard dependencies on TaskService and AutoMapper
  • It lacks the flexibility to provide alternative service implementations.

Improved design

public class TaskListController 
{
    private readonly ITaskService taskService;
    private readonly IObjectMapper mapper;
    public TaskListController(ITaskService taskService,IObjectMapper mapper)
    {
         this.taskService = taskService;
         this.mapper = mapper;
    }

}

Now you can see we have removed all the hard dependencies. We can use a tool like a DI framework to define the object graph well in advance, and when unit testing we can mock these constructor arguments.
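For example, with the built-in ASP.NET Core container (a sketch, assuming the minimal hosting model where builder is the WebApplicationBuilder and TaskService/AutoMapper are the concrete implementations from the first snippet):

// Composition root: map each abstraction to a concrete implementation.
builder.Services.AddScoped<ITaskService, TaskService>();
builder.Services.AddSingleton<IObjectMapper, AutoMapper>();

// The framework now constructs TaskListController with both dependencies injected,
// and unit tests can pass mocks to the constructor instead.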

Conclusion

Hope you guys found this series on SOLID principles useful.

Understanding SOLID Principles: Liskov Substitution

This post is a continuation of the series on SOLID Principles that I have been writing.

Please make sure you read my other blogs on the topic

The Liskov substitution principle (LSP) is a set of rules for creating inheritance hierarchies in which the consumers of these classes can reliably use any class or subclass without breaking their code.

If S is subtype of T, then objects of type T may be replaced with objects of type S, without breaking the program — Barbara Liskov

Let’s look at some practical examples to understand this further. Consider we have a base class called ShippingStrategy which is inherited by WorldWideShippingStrategy, and it contains the following method

decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)

The only thing that this method shares between ShippingStrategy and WorldWideShippingStrategy is its signature. The implementation could be completely different between these two classes.
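To make the relationship concrete, here is a minimal sketch of the hierarchy being discussed (an assumption that the base method is declared virtual so the subtype can override it; the bodies are shown below):

public class ShippingStrategy
{
    public virtual decimal CalculateShippingCost(
        float packageWeightInKilograms,
        Size<float> packageDimensionsInInches,
        RegionInfo destination) => default(decimal); // base implementation shown below
}

public class WorldWideShippingStrategy : ShippingStrategy
{
    public override decimal CalculateShippingCost(
        float packageWeightInKilograms,
        Size<float> packageDimensionsInInches,
        RegionInfo destination) => default(decimal); // subtype implementation shown below
}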

Let’s see the base class (ShippingStrategy) implementation

public decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)
{
    if (packageWeightInKilograms <= 0f)
        throw new Exception($"{nameof(packageWeightInKilograms)} must be positive and non zero");

    // Actual logic

    return default(decimal);
}

As you can see from the above code example the function has a precondition to make sure the packageWeightInKilograms is always positive and non zero. Preconditions are defined as all the conditions necessary for a method to run reliably and without fault.

Now let’s consider the WorldWideShippingStrategy implementation

public decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)
{
    if (packageWeightInKilograms <= 0f)
        throw new Exception($"{nameof(packageWeightInKilograms)} must be positive and non zero");

    if (destination == null)
        throw new Exception($"{nameof(destination)}: destination must be provided");

    // Actual logic

    return default(decimal);
}

Since we have added an additional precondition in this implementation, any consumer written against the base type might assume that they can pass null to the destination parameter, and if they then use WorldWideShippingStrategy their program would break.

This is the actual problem that the LSP is trying to address, using the following rules

Contract Rules

  • Preconditions cannot be strengthened in a subtype
  • Postconditions cannot be weakened in a subtype
  • Invariants, that is conditions that must remain true throughout the lifetime of an object, must be maintained in a subtype

Variance Rules

  • There must be contravariance of the method arguments in the subtype.
  • There must be covariance of the return types in the subtype
  • No new exceptions can be thrown by the subtype unless they are part of the existing exception hierarchy

Conclusion

Even though the LSP might appear to be one of the more complex SOLID principles, once we understand the concepts of preconditions, postconditions and variance it becomes easier to grasp.

Hope this helps!!

Understanding SOLID Principles: Open/Closed

As beginners, we have all written code that is quite procedural, irrespective of the language we started with. Beginners tend to use classes as storage mechanisms for methods, regardless of whether those methods truly belong together. There is little or no architecture to the code, and there are very few extension points. Any change in the requirements results in modifying existing code, which can cause regressions.

In the previous part we looked at the Single Responsibility Principle, which talked about the god object and how to refactor it for clarity. In this post, let’s look at the Open/Closed Principle.

The name Open/Closed Principle may sound like an oxymoron, but let’s look at the definition from Meyer

Software entities should be open for extension, but closed for modification

Bertrand Meyer

Open for extension – This means that the behavior of the module can be extended. As the requirements of the application change, we are able to extend the module with new behaviors that satisfy those changes. In other words, we are able to change what the module does.

Closed for modification – Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether in a linkable library, a DLL, or a Java .jar, remains untouched.

Extension Points

Classes that honor the OCP should be open to extension by containing defined extension points where future functionality can hook into the existing code and provide new behaviors.

If you look at the code sample from the Single Responsibility Principle post, the snippet you see before refactoring is an example of code with no extension points.

If you allow changes to existing code there is a higher chance of regression, and when you change an existing interface it has an impact on every client that uses it.

We can provide extension points using the following concepts

  • Virtual Methods
  • Abstract Methods
  • Interface inheritance

Virtual Methods

If we mark one of the members of a class as virtual, it becomes an extension point. This type of extension is handled via inheritance. When the requirements for an existing class change, you can subclass the existing class and, without modifying its source code, change its behavior to satisfy the new requirement, as in the sketch below.
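A minimal sketch of a virtual-method extension point; the class and method names are purely illustrative:

// The class ships with a virtual member, i.e. a defined extension point.
public class DiscountCalculator
{
    public virtual decimal Apply(decimal price) => price;
}

// A new requirement is satisfied by subclassing rather than editing the original class.
public class SeasonalDiscountCalculator : DiscountCalculator
{
    public override decimal Apply(decimal price) => price * 0.9m;
}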

Abstract Methods

Abstract members are another OOP concept we can use to provide extension points. By declaring a member as abstract, you leave the implementation details to the inheriting class. Unlike virtual members, here we are not overriding an existing implementation, but rather delegating the implementation to the subclass.

Interface inheritance

The final type of extension point is interface inheritance. Here, the client’s dependency on a class is replaced with an interface. Unlike the other two methods, with an interface no implementation is inherited at all, so each implementation is free to vary completely, which makes this the most flexible option, as the sketch below shows.

This also helps to keep inheritance hierarchies shallow, with few layers of subclassing.
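A sketch of an interface-based extension point, again with illustrative names; the client depends only on the interface, and new behavior arrives as new implementations rather than modifications:

public interface IDiscountPolicy
{
    decimal Apply(decimal price);
}

public class NoDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price) => price;
}

// A later requirement becomes a new implementation, not a modification.
public class SeasonalDiscountPolicy : IDiscountPolicy
{
    public decimal Apply(decimal price) => price * 0.9m;
}

// The client is closed for modification but open to any number of new policies.
public class CheckoutService
{
    private readonly IDiscountPolicy discountPolicy;

    public CheckoutService(IDiscountPolicy discountPolicy)
    {
        this.discountPolicy = discountPolicy;
    }

    public decimal Total(decimal price) => discountPolicy.Apply(price);
}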

Closed for change

Design and document for inheritance or else prohibit it

Joshua Bloch

If you are using inheritance, you must be aware that any class can be inherited and extended with new functionality. If we allow that, we must provide proper documentation for the class so as to protect and inform future programmers who extend it.

If you are not expecting a class to be extended, it’s better to restrict extension by using the sealed keyword.

Conclusion

Knowing how to add extension points is not sufficient, however. You also need to know when they are applicable. Identify the parts of the requirements that are likely to change or that are particularly troublesome to implement. Depending on the specific scenario, the code can be rigid or it can be fluid, with myriad extension points.

Reference

Adaptive Code via C# – Gary McLean Hall

Understanding SOLID Principles: Single Responsibility

Agile methodology is not just an alternative to more rigid processes like waterfall, but a reaction to them. The aim of agile is to embrace change as a necessary part of the contract between client and developer.

If your code is not adaptive enough, Your process cannot be agile enough

UMAMAHESWARAN

With the sole purpose of agile being adaptability, developers should strive to ensure that their code is maintainable, readable, tested and, more importantly, adaptive to change. SOLID is the acronym for a set of practices that, when implemented together, make code adaptive to change.

Each of these principles is a worthy practice by itself that any software developer would do well to learn. When used in collaboration, these practices give code a completely different structure. Let’s explore the SRP.

Single Responsibility Principle

The single responsibility principle (SRP) instructs developers to write code that has one and only one reason to change. If a class has more than one reason to change, it has more than one responsibility. Classes with more than a single responsibility should be broken down into smaller classes, each of which should have only one responsibility and reason to change.

To achieve single responsibility, you have to identify classes that have too many responsibilities and use delegation and abstraction to refactor them.

What do I mean by one reason to change? Let’s look at an example of a TradeProcessor to better explain the problem.

using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using static System.Console;

namespace SalesProcessor
{
	public class TradeProcessor
	{
		public void ProcessTrades(Stream stream)
		{
			// read rows
			var lines = new List<string>();
			using (var reader = new StreamReader(stream))
			{
				string line;
				while ((line = reader.ReadLine()) != null)
				{
					lines.Add(line);
				}
			}

			var trades = new List<TradeRecord>();

			var lineCount = 1;
			foreach (var fields in lines.Select(line => line.Split(new[] { ',' })))
			{
				if (fields.Length != 3)
				{
					WriteLine("WARN: Line {0} malformed. Only {1} field(s) found.", lineCount, fields.Length);
					continue;
				}

				if (fields[0].Length != 6)
				{
					WriteLine("WARN: Trade currencies on line {0} malformed: '{1}'", lineCount, fields[0]);
					continue;
				}

				if (!int.TryParse(fields[1], out var tradeAmount))
				{
					WriteLine("WARN: Trade amount on line {0} not a valid integer: '{1}'", lineCount, fields[1]);
				}

				if (!decimal.TryParse(fields[2], out var tradePrice))
				{
					WriteLine("WARN: Trade price on line {0} not a valid decimal: '{1}'", lineCount, fields[2]);
				}

				var sourceCurrencyCode = fields[0].Substring(0, 3);
				var destinationCurrencyCode = fields[0].Substring(3, 3);

				// calculate values
				var trade = new TradeRecord
				{
					SourceCurrency = sourceCurrencyCode,
					DestinationCurrency = destinationCurrencyCode,
					Lots = tradeAmount / LotSize,
					Price = tradePrice
				};

				trades.Add(trade);

				lineCount++;
			}

			using (var connection = new SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
			{
				connection.Open();
				using (var transaction = connection.BeginTransaction())
				{
					foreach (var trade in trades)
					{
						var command = connection.CreateCommand();
						command.Transaction = transaction;
						command.CommandType = System.Data.CommandType.StoredProcedure;
						command.CommandText = "dbo.insert_trade";
						command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
						command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
						command.Parameters.AddWithValue("@lots", trade.Lots);
						command.Parameters.AddWithValue("@price", trade.Price);

						command.ExecuteNonQuery();
					}

					transaction.Commit();
				}
				connection.Close();
			}

			WriteLine("INFO: {0} trades processed", trades.Count);
		}

		private static float LotSize = 100000f;
	}
	internal class TradeRecord
	{
		internal string DestinationCurrency;
		internal float Lots;
		internal decimal Price;
		internal string SourceCurrency;
	}
}


This class is trying to achieve following

  1. It reads every line from a Stream and stores each line in a list of strings.
  2. It parses out individual fields from each line and stores them in a more structured list of Trade-Record instances.
  3. The parsing includes some validation and some logging to the console.
  4. Each TradeRecord is enumerated, and a stored procedure is called to insert the trades into a database

The responsibilities of the TradeProcessor are reading streams, parsing strings, validating fields, logging, and database insertion. The SRP states that this class should only have a single reason to change; however, the reality is that the TradeProcessor will change under the following circumstances:

  • When you decide not to use a Stream for input but instead read the trades from a remote call to a web service.
  • When the format of the input data changes, perhaps with the addition of an extra field indicating the broker for the transaction
  • When the validation rules of the input data change
  • When the way in which you log warnings, errors and information changes. If you are using a hosted web service, writing to the console would not be a viable option.
  • When the database changes in some way — perhaps the insert_trade stored procedure requires a new parameter for the broker, too, or you decide not to store the data in a relation database and opt for document storage or the database is moved behind a web service that you must call.

For each of these changes, this class would have to be modified.

Refactoring for clarity

This class not only has too many responsibilities, it has a single method with too many responsibilities. So first, you split this method into multiple methods.

public void ProcessTrades(Stream stream)
{
	var lines = ReadTradeData(stream);
	var trades = ParseTrades(lines);
	StoreTrades(trades);
}

Let’s look at ReadTradeData,

private IEnumerable<string> ReadTradeData(Stream stream)
{
	var tradeData = new List<string>();
	using (var reader = new StreamReader(stream))
	{
		string line;
		while ((line = reader.ReadLine()) != null)
		{
			tradeData.Add(line);
		}
	}
	return tradeData;
}

This is exactly the same code that we had in the original method, but it has simply been encapsulated in a method that returns a list of strings.

Let’s look at ParseTrades method

This method has changed a little from the original implementation because it, too, delegates some tasks to other methods.

private IEnumerable<TradeRecord> ParseTrades(IEnumerable<string> tradeData)
{
	var trades = new List<TradeRecord>();
	var lineCount = 1;
	foreach (var line in tradeData)
	{
		var fields = line.Split(new char[] { ',' });

		if (!ValidateTradeData(fields, lineCount))
		{
			continue;
		}

		var trade = MapTradeDataToTradeRecord(fields);

		trades.Add(trade);

		lineCount++;
	}

	return trades;
}

This method delegates validation and mapping responsibilities to other methods. Without this delegation, this section of the process would still be too complex and it would retain too many responsibilities.

private bool ValidateTradeData(string[] fields, int currentLine)
{
	if (fields.Length != 3)
	{
		LogMessage("WARN: Line {0} malformed. Only {1} field(s) found.", currentLine, fields.Length);
		return false;
	}

	if (fields[0].Length != 6)
	{
		LogMessage("WARN: Trade currencies on line {0} malformed: '{1}'", currentLine, fields[0]);
		return false;
	}

	int tradeAmount;
	if (!int.TryParse(fields[1], out tradeAmount))
	{
		LogMessage("WARN: Trade amount on line {0} not a valid integer: '{1}'", currentLine, fields[1]);
		return false;
	}

	decimal tradePrice;
	if (!decimal.TryParse(fields[2], out tradePrice))
	{
		LogMessage("WARN: Trade price on line {0} not a valid decimal: '{1}'", currentLine, fields[2]);
		return false;
	}

	return true;
}

private void LogMessage(string message, params object[] args)
{
	Console.WriteLine(message, args);
}

private TradeRecord MapTradeDataToTradeRecord(string[] fields)
{
	var sourceCurrencyCode = fields[0].Substring(0, 3);
	var destinationCurrencyCode = fields[0].Substring(3, 3);
	var tradeAmount = int.Parse(fields[1]);
	var tradePrice = decimal.Parse(fields[2]);

	var trade = new TradeRecord
	{
		SourceCurrency = sourceCurrencyCode,
		DestinationCurrency = destinationCurrencyCode,
		Lots = tradeAmount / LotSize,
		Price = tradePrice
	};

	return trade;
}

And finally the StoreTrades method

private void StoreTrades(IEnumerable<TradeRecord> trades)
{
	using (var connection = new System.Data.SqlClient.SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
	{
		connection.Open();
		using (var transaction = connection.BeginTransaction())
		{
			foreach (var trade in trades)
			{
				var command = connection.CreateCommand();
				command.Transaction = transaction;
				command.CommandType = System.Data.CommandType.StoredProcedure;
				command.CommandText = "dbo.insert_trade";
				command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
				command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
				command.Parameters.AddWithValue("@lots", trade.Lots);
				command.Parameters.AddWithValue("@price", trade.Price);

				command.ExecuteNonQuery();
			}

			transaction.Commit();
		}
		connection.Close();
	}

	LogMessage("INFO: {0} trades processed", trades.Count());
}

Now if you compare this with the previous implementation, it is a clear improvement. However, what we have really achieved is more readability. This new code is in no way more adaptable than the previous code; you still need to change the TradeProcessor class for any of the previously mentioned circumstances. To achieve adaptability you need abstraction.

Refactoring for abstraction

In this step we will introduce several abstractions that will allow us to handle any change request for this class. The next task is to split each responsibility into different classes and place them behind interfaces.

public class TradeProcessor
{
    public TradeProcessor(ITradeDataProvider tradeDataProvider, ITradeParser tradeParser, ITradeStorage tradeStorage)
    {
        this.tradeDataProvider = tradeDataProvider;
        this.tradeParser = tradeParser;
        this.tradeStorage = tradeStorage;
    }

    public void ProcessTrades()
    {
        var lines = tradeDataProvider.GetTradeData();
        var trades = tradeParser.Parse(lines);
        tradeStorage.Persist(trades);
    }

    private readonly ITradeDataProvider tradeDataProvider;
    private readonly ITradeParser tradeParser;
    private readonly ITradeStorage tradeStorage;
}

The TradeProcessor class now looks significantly different from the previous implementation. It no longer contains the implementation details for the whole process but instead contains the blueprint for the process. This class models the process of transferring trade data from one format to another. This is its only responsibility, its only concern, and the only reason that this class should change. If the process itself changes, this class will change to reflect it. But if you decide you no longer want to retrieve data from a Stream, log on to the console, or store the trades in a database, this class remains as is.
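The code above and below references a handful of contracts from the SingleResponsibilityPrinciple.Contracts namespace that are not listed in full in this post. Based on how they are used, they would look roughly like this (a sketch inferred from the calls, not the book's exact listing):

public interface ITradeDataProvider
{
    IEnumerable<string> GetTradeData();
}

public interface ITradeParser
{
    IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData);
}

public interface ITradeStorage
{
    void Persist(IEnumerable<TradeRecord> trades);
}

public interface ITradeValidator
{
    bool Validate(string[] tradeData);
}

public interface ITradeMapper
{
    TradeRecord Map(string[] fields);
}

public interface ILogger
{
    void LogWarning(string message, params object[] args);
}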

using System.Collections.Generic;
using System.IO;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class StreamTradeDataProvider : ITradeDataProvider
    {
        public StreamTradeDataProvider(Stream stream)
        {
            this.stream = stream;
        }

        public IEnumerable<string> GetTradeData()
        {
            var tradeData = new List<string>();
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    tradeData.Add(line);
                }
            }
            return tradeData;
        }

        private readonly Stream stream;
    }
}

using System.Collections.Generic;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeParser : ITradeParser
    {
        private readonly ITradeValidator tradeValidator;
        private readonly ITradeMapper tradeMapper;

        public SimpleTradeParser(ITradeValidator tradeValidator, ITradeMapper tradeMapper)
        {
            this.tradeValidator = tradeValidator;
            this.tradeMapper = tradeMapper;
        }

        public IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData)
        {
            var trades = new List<TradeRecord>();
            var lineCount = 1;
            foreach (var line in tradeData)
            {
                var fields = line.Split(new char[] { ',' });

                if (!tradeValidator.Validate(fields))
                {
                    continue;
                }

                var trade = tradeMapper.Map(fields);

                trades.Add(trade);

                lineCount++;
            }

            return trades;
        }
    }
}

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeValidator : ITradeValidator
    {
        private readonly ILogger logger;

        public SimpleTradeValidator(ILogger logger)
        {
            this.logger = logger;
        }

        public bool Validate(string[] tradeData)
        {
            if (tradeData.Length != 3)
            {
                logger.LogWarning("Line malformed. Only {0} field(s) found.", tradeData.Length);
                return false;
            }

            if (tradeData[0].Length != 6)
            {
                logger.LogWarning("Trade currencies malformed: '{0}'", tradeData[0]);
                return false;
            }

            int tradeAmount;
            if (!int.TryParse(tradeData[1], out tradeAmount))
            {
                logger.LogWarning("Trade not a valid integer: '{0}'", tradeData[1]);
                return false;
            }

            decimal tradePrice;
            if (!decimal.TryParse(tradeData[2], out tradePrice))
            {
                logger.LogWarning("Trade price not a valid decimal: '{0}'", tradeData[2]);
                return false;
            }

            return true;
        }
    }
}

Now if you refer back to the list of circumstances, this new version allows you to implement each one without touching the existing classes.
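To see how the pieces fit together, the whole pipeline could be composed along these lines (a sketch; ConsoleLogger, SimpleTradeMapper and AdoNetTradeStorage stand in for implementations not shown in this post):

// 'stream' is the input Stream containing the trade data.
var tradeProcessor = new TradeProcessor(
    new StreamTradeDataProvider(stream),
    new SimpleTradeParser(
        new SimpleTradeValidator(new ConsoleLogger()),
        new SimpleTradeMapper()),
    new AdoNetTradeStorage());

tradeProcessor.ProcessTrades();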

Examples

Scenario 1: Instead of a Stream, your business team asks you to read data from a web service

Solution: Create new implementation for ITradeDataProvider

Scenario 2: A new field is added to the data format

Solution: Change the implementations of ITradeValidator, ITradeMapper and ITradeStorage

Scenario 3: The validation rules changes

Solution: Edit the ITradeValidator implementation

Scenario 4: Your architect asks you to use a document DB instead of a relational database

Solution: Create new implementation for ITradeStorage

Conclusion

I hope this blog clears your doubts regarding the SRP and convinces you that by combining abstractions via interfaces with continuous refactoring, you can make your code more adaptive while also adhering to the Single Responsibility Principle.

Reference

Adaptive Code via C# – Gary McLean Hall