Pipes and Filters Design Pattern: A Practical Approach

When it comes to data processing, one of the common challenges developers face is how to write maintainable and reusable code. The pipes and filters design pattern is a powerful tool that can help you solve this problem by breaking down a complex data processing task into a series of simple, reusable filters that can be combined in different ways to achieve different results.

In this blog post, we will take a look at how to use the pipes and filters pattern to validate incoming data using C#. We will start by defining the basic building blocks of the pattern, then we will implement a simple example that demonstrates how the pattern works in practice.

Building Blocks

The pipes and filters pattern consists of three main components: filters, pipes, and the pipeline.

  • A filter is a simple piece of code that performs a specific task on the input data and returns the result. In our example, we will have two filters: a validation filter that checks if the input data is valid and a transformation filter that converts the input data to uppercase.
  • A pipe is a data structure that connects the output of one filter to the input of another. In our example, we will not use pipes explicitly, but the pipeline will be responsible for connecting the filters together.
  • The pipeline is the main component of the pattern that holds all the filters and connects them together. It is responsible for applying the filters to the input data in the correct order and returning the final result.

Implementing the Pipes and Filters Pattern in C#

Now that we have a basic understanding of the components of the pipes and filters pattern, let’s take a look at how we can implement it in C#.

First, we will define an interface for filters called IPipeFilter<T> that has a single method called Process, which takes an input of type T and returns an output of type T.

interface IPipeFilter<T>
{
    T Process(T input);
}

Next, we will create two filters that implement this interface. The first one is DataValidationFilter, which checks whether the input data is valid and throws an exception if it is not.

class DataValidationFilter : IPipeFilter<string>
{
    public string Process(string input)
    {
        // Reject null, empty or whitespace-only input early in the pipeline.
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("Invalid input data");

        return input;
    }
}

The second filter is DataTransformationFilter, which converts the input data to uppercase.

class DataTransformationFilter : IPipeFilter<string>
{
    public string Process(string input)
    {
        return input.ToUpper();
    }
}

Finally, we will create a class called DataProcessingPipeline<T> that takes a list of IPipeFilter<T> as a constructor argument and applies each filter to the input data in the order provided. It implements a small IPipeLine<T> interface with the same Process signature, so callers can depend on an abstraction rather than the concrete pipeline.

interface IPipeLine<T>
{
    T Process(T input);
}

class DataProcessingPipeline<T> : IPipeLine<T>
{
    private readonly List<IPipeFilter<T>> _filters;

    public DataProcessingPipeline(List<IPipeFilter<T>> filters)
    {
        _filters = filters;
    }

    public T Process(T input)
    {
        foreach (var filter in _filters)
        {
            input = filter.Process(input);
        }
        return input;
    }
}

With the above classes in place, we are ready to build the pipeline and use it to validate and transform incoming data.

class Program
{
    static void Main(string[] args)
    {
        var pipeline = new DataProcessingPipeline<string>(new List<IPipeFilter<string>>
        {
            new DataValidationFilter(),
            new DataTransformationFilter()
        });

        try
        {
            var processedData = pipeline.Process("valid input data");
            Console.WriteLine(processedData);
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

In this example, we first create an instance of DataProcessingPipeline<string> with a list of filters containing DataValidationFilter and DataTransformationFilter. We then apply the pipeline to the input data “valid input data”; the output of the pipeline is “VALID INPUT DATA”.
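To make the failure path concrete as well, here is a self-contained sketch that runs both a valid and an invalid input through the same two filters. The filter bodies mirror the ones above, with the small tweak that the sketch throws ArgumentException rather than the base Exception type:

```csharp
using System;
using System.Collections.Generic;

// Minimal restatement of the pattern so this snippet compiles on its own.
interface IPipeFilter<T> { T Process(T input); }

class DataValidationFilter : IPipeFilter<string>
{
    public string Process(string input)
    {
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("Invalid input data");
        return input;
    }
}

class DataTransformationFilter : IPipeFilter<string>
{
    public string Process(string input) => input.ToUpper();
}

class Demo
{
    static void Main()
    {
        var filters = new List<IPipeFilter<string>>
        {
            new DataValidationFilter(),
            new DataTransformationFilter()
        };

        // Run a valid and an invalid candidate through the same pipeline.
        foreach (var candidate in new[] { "valid input data", "   " })
        {
            try
            {
                var result = candidate;
                foreach (var filter in filters)
                    result = filter.Process(result);
                Console.WriteLine(result);
            }
            catch (ArgumentException ex)
            {
                Console.WriteLine("Rejected: " + ex.Message);
            }
        }
    }
}
```

The invalid, whitespace-only input never reaches the transformation filter; the validation filter short-circuits the whole pipeline.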

Conclusion

The pipes and filters pattern is a powerful tool for breaking down complex data processing tasks into simple, reusable components. It can help you write maintainable code that is easy to understand and modify. In this blog post, we have seen how to use the pipes and filters pattern to validate incoming data in C#, but the pattern can be used in many other scenarios as well. I hope this example gives you a good starting point for using the pattern in your own projects.

Understanding SOLID Principles: Open/Closed

As beginners, we have all written code that is quite procedural, irrespective of the language we started with. Beginners tend to use classes as storage mechanisms for methods, regardless of whether those methods truly belong together. Such code has little or no architecture and very few extension points, so any change in requirements results in modifying existing code, which can lead to regressions.

In the previous part we looked at the Single Responsibility Principle, which talked about the god object and how you should refactor it for clarity. In this post, let’s look at the Open/Closed Principle.

The name Open/Closed Principle may sound like an oxymoron. But let’s look at the definition from Meyer:

Software entities should be open for extension, but closed for modification

Bertrand Meyer

Open for extension – This means that the behavior of the module can be extended. As the requirements of the application change, we are able to extend the module with new behaviors that satisfy those changes. In other words, we are able to change what the module does.

Closed for modification – Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar, remains untouched.

Extension Points

Classes that honor the OCP should be open to extension by containing defined extension points where future functionality can hook into the existing code and provide new behaviors.

If you look at the code sample from the Single Responsibility Principle post, the snippet shown before refactoring is an example of code with no extension points.

If you allow changes to existing code, there is a higher chance of regression, and when you change an existing interface, it has an impact on the clients.

We can provide extension points using the following concepts:

  • Virtual methods
  • Abstract methods
  • Interfaces

Virtual Methods

If we mark a member of a class as virtual, it becomes an extension point. This type of extension is handled via inheritance: when the requirements for an existing class change, you can subclass it and, without modifying its source code, change its behavior to satisfy the new requirement.
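As a sketch (the class and member names here are invented for illustration), a virtual member lets a subclass change one step of the behavior without touching the original source:

```csharp
using System;

// A base class that exposes an extension point via a virtual method.
public class ReportGenerator
{
    public string Generate(string data)
    {
        return FormatHeader() + data;
    }

    // Extension point: subclasses may replace the header behavior.
    protected virtual string FormatHeader() => "REPORT: ";
}

// A new requirement is satisfied by subclassing; ReportGenerator is unchanged.
public class AuditedReportGenerator : ReportGenerator
{
    protected override string FormatHeader() => "AUDITED REPORT: ";
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new ReportGenerator().Generate("sales"));
        Console.WriteLine(new AuditedReportGenerator().Generate("sales"));
    }
}
```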

Abstract Methods

Abstraction is another OOP concept we can use to provide extension points. By declaring a member as abstract, you leave the implementation details to the inheriting class. Unlike with virtual, here we are not overriding an existing implementation but rather delegating the implementation to the subclass.
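A small illustrative sketch (again with invented names): the base class fixes the workflow but provides no default for the abstract step, so every subclass must supply one:

```csharp
using System;

// The base class fixes the workflow but leaves one step abstract.
public abstract class MessageSender
{
    public void Send(string message)
    {
        Console.WriteLine(Deliver(message));
    }

    // No default implementation: every subclass must supply its own.
    protected abstract string Deliver(string message);
}

public class EmailSender : MessageSender
{
    protected override string Deliver(string message) => "EMAIL: " + message;
}

class Demo
{
    static void Main() => new EmailSender().Send("hello");
}
```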

Interface inheritance

The final type of extension point is interface inheritance. Here, the client’s dependency on a class is replaced with a dependency on an interface. Unlike the other two methods, with an interface all implementation details live in the implementing classes, which offers much more flexibility.

This also helps keep inheritance hierarchies shallow, with few layers of subclassing.
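For illustration (the names below are hypothetical), a client depending only on an interface can be extended with new behavior by writing a new implementation, leaving the client closed for modification:

```csharp
using System;

// The client depends only on this interface, not on any concrete logger.
public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine("console: " + message);
}

// A new requirement (say, logging to a file or a web service) is met by a new
// ILogger implementation; OrderProcessor itself never changes.
public class OrderProcessor
{
    private readonly ILogger logger;

    public OrderProcessor(ILogger logger)
    {
        this.logger = logger;
    }

    public void Process(string orderId) => logger.Log("processed " + orderId);
}

class Demo
{
    static void Main() => new OrderProcessor(new ConsoleLogger()).Process("A-42");
}
```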

Closed for change

Design and document for inheritance or else prohibit it

Joshua Bloch

If you are using inheritance, you must be aware that any class can be inherited from and extended with new functionality. If you allow this, you must provide proper documentation for the class so as to protect and inform future programmers who extend it.

If you are not expecting a class to be extended, it’s better to restrict extension by using the sealed keyword.
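A quick sketch of that restriction (the class name is invented): attempting to derive from a sealed class is a compile-time error, so the commented-out line below would not compile.

```csharp
using System;

// Marking the class sealed prohibits inheritance entirely.
public sealed class AuditLogger
{
    public void Log(string message) => Console.WriteLine("AUDIT: " + message);
}

// The line below would not compile: a sealed class cannot be a base class.
// public class CustomAuditLogger : AuditLogger { }

class Demo
{
    static void Main() => new AuditLogger().Log("user signed in");
}
```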

Conclusion

Knowing how to add extension points is not sufficient, however; you also need to know when doing so is applicable. Identify the parts of the requirements that are likely to change or that are particularly troublesome to implement. Depending on the specific scenario, the code can be rigid, or it can be fluid with myriad extension points.

Reference

Adaptive Code via C# – Gary McLean Hall

Understanding SOLID Principles: Single Responsibility

Agile methodology is not just an alternative to more rigid processes like waterfall, but a reaction to them. The aim of agile is to embrace change as a necessary part of the contract between client and developer.

If your code is not adaptive enough, your process cannot be agile enough

UMAMAHESWARAN

With adaptability being the whole point of agile, developers should strive to ensure that their code is maintainable, readable, tested and, more importantly, adaptive to change. SOLID is the acronym for a set of practices that, when implemented together, makes code adaptive to change.

Each of these principles is a worthy practice by itself that any software developer would do well to learn. When used in collaboration, these practices give code a completely different structure. Let’s explore SRP.

Single Responsibility Principle

The single responsibility principle (SRP) instructs developers to write code that has one and only one reason to change. If a class has more than one reason to change, it has more than one responsibility. Classes with more than a single responsibility should be broken down into smaller classes, each of which should have only one responsibility and reason to change.

To achieve single responsibility you have to identify classes that have too many responsibilities and use delegation and abstraction to refactor the code to achieve single responsibility.

What do I mean by one reason to change? Let’s look at an example of a TradeProcessor to better explain the problem.

namespace SalesProcessor
{
	public class TradeProcessor
	{
		public void ProcessTrades(Stream stream)
		{
			// read rows
			var lines = new List<string>();
			using (var reader = new StreamReader(stream))
			{
				string line;
				while ((line = reader.ReadLine()) != null)
				{
					lines.Add(line);
				}
			}

			var trades = new List<TradeRecord>();

			var lineCount = 1;
			foreach (var fields in lines.Select(line => line.Split(new[] { ',' })))
			{
				if (fields.Length != 3)
				{
					WriteLine("WARN: Line {0} malformed. Only {1} field(s) found.", lineCount, fields.Length);
					continue;
				}

				if (fields[0].Length != 6)
				{
					WriteLine("WARN: Trade currencies on line {0} malformed: '{1}'", lineCount, fields[0]);
					continue;
				}

				if (!int.TryParse(fields[1], out var tradeAmount))
				{
					WriteLine("WARN: Trade amount on line {0} not a valid integer: '{1}'", lineCount, fields[1]);
					continue;
				}

				if (!decimal.TryParse(fields[2], out var tradePrice))
				{
					WriteLine("WARN: Trade price on line {0} not a valid decimal: '{1}'", lineCount, fields[2]);
					continue;
				}

				var sourceCurrencyCode = fields[0].Substring(0, 3);
				var destinationCurrencyCode = fields[0].Substring(3, 3);

				// calculate values
				var trade = new TradeRecord
				{
					SourceCurrency = sourceCurrencyCode,
					DestinationCurrency = destinationCurrencyCode,
					Lots = tradeAmount / LotSize,
					Price = tradePrice
				};

				trades.Add(trade);

				lineCount++;
			}

			using (var connection = new SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
			{
				connection.Open();
				using (var transaction = connection.BeginTransaction())
				{
					foreach (var trade in trades)
					{
						var command = connection.CreateCommand();
						command.Transaction = transaction;
						command.CommandType = System.Data.CommandType.StoredProcedure;
						command.CommandText = "dbo.insert_trade";
						command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
						command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
						command.Parameters.AddWithValue("@lots", trade.Lots);
						command.Parameters.AddWithValue("@price", trade.Price);

						command.ExecuteNonQuery();
					}

					transaction.Commit();
				}
				connection.Close();
			}

			WriteLine("INFO: {0} trades processed", trades.Count);
		}

		private static float LotSize = 100000f;
	}
	internal class TradeRecord
	{
		internal string DestinationCurrency;
		internal float Lots;
		internal decimal Price;
		internal string SourceCurrency;
	}
}


This class is trying to achieve the following:

  1. It reads every line from a Stream and stores each line in a list of strings.
  2. It parses out individual fields from each line and stores them in a more structured list of TradeRecord instances.
  3. The parsing includes some validation and some logging to the console.
  4. Each TradeRecord is enumerated, and a stored procedure is called to insert the trades into a database.

The responsibilities of the TradeProcessor are reading streams, parsing strings, validating fields, logging, and database insertion. The SRP states that this class should have only a single reason to change; however, the reality is that the TradeProcessor will change under the following circumstances:

  • When you decide not to use a Stream for input but instead read the trades from a remote call to a web service.
  • When the format of the input data changes, perhaps with the addition of an extra field indicating the broker for the transaction.
  • When the validation rules of the input data change.
  • When the way in which you log warnings, errors and information changes. If you are using a hosted web service, writing to the console would not be a viable option.
  • When the database changes in some way: perhaps the insert_trade stored procedure requires a new parameter for the broker too, or you decide not to store the data in a relational database and opt for document storage, or the database is moved behind a web service that you must call.

For each of these changes, this class would have to be modified.

Refactoring for clarity

This class not only has too many responsibilities, it has a single method with too many responsibilities. So first, you split this method into multiple methods.

public void ProcessTrades(Stream stream)
{
	var lines = ReadTradeData(stream);
	var trades = ParseTrades(lines);
	StoreTrades(trades);
}

Let’s look at ReadTradeData:

private IEnumerable<string> ReadTradeData(Stream stream)
{
	var tradeData = new List<string>();
	using (var reader = new StreamReader(stream))
	{
		string line;
		while ((line = reader.ReadLine()) != null)
		{
			tradeData.Add(line);
		}
	}
	return tradeData;
}

This is exactly the same code as in the original, but it has simply been encapsulated in a method that returns a list of strings.

Let’s look at the ParseTrades method. It has changed a little from the original implementation because it, too, delegates some tasks to other methods.

private IEnumerable<TradeRecord> ParseTrades(IEnumerable<string> tradeData)
{
	var trades = new List<TradeRecord>();
	var lineCount = 1;
	foreach (var line in tradeData)
	{
		var fields = line.Split(new char[] { ',' });

		if (!ValidateTradeData(fields, lineCount))
		{
			continue;
		}

		var trade = MapTradeDataToTradeRecord(fields);

		trades.Add(trade);

		lineCount++;
	}

	return trades;
}

This method delegates validation and mapping responsibilities to other methods. Without this delegation, this section of the process would still be too complex and it would retain too many responsibilities.

private bool ValidateTradeData(string[] fields, int currentLine)
{
	if (fields.Length != 3)
	{
		LogMessage("WARN: Line {0} malformed. Only {1} field(s) found.", currentLine, fields.Length);
		return false;
	}

	if (fields[0].Length != 6)
	{
		LogMessage("WARN: Trade currencies on line {0} malformed: '{1}'", currentLine, fields[0]);
		return false;
	}

	int tradeAmount;
	if (!int.TryParse(fields[1], out tradeAmount))
	{
		LogMessage("WARN: Trade amount on line {0} not a valid integer: '{1}'", currentLine, fields[1]);
		return false;
	}

	decimal tradePrice;
	if (!decimal.TryParse(fields[2], out tradePrice))
	{
		LogMessage("WARN: Trade price on line {0} not a valid decimal: '{1}'", currentLine, fields[2]);
		return false;
	}

	return true;
}

private void LogMessage(string message, params object[] args)
{
	Console.WriteLine(message, args);
}

private TradeRecord MapTradeDataToTradeRecord(string[] fields)
{
	var sourceCurrencyCode = fields[0].Substring(0, 3);
	var destinationCurrencyCode = fields[0].Substring(3, 3);
	var tradeAmount = int.Parse(fields[1]);
	var tradePrice = decimal.Parse(fields[2]);

	var trade = new TradeRecord
	{
		SourceCurrency = sourceCurrencyCode,
		DestinationCurrency = destinationCurrencyCode,
		Lots = tradeAmount / LotSize,
		Price = tradePrice
	};

	return trade;
}

And finally, the StoreTrades method:

private void StoreTrades(IEnumerable<TradeRecord> trades)
{
	using (var connection = new System.Data.SqlClient.SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
	{
		connection.Open();
		using (var transaction = connection.BeginTransaction())
		{
			foreach (var trade in trades)
			{
				var command = connection.CreateCommand();
				command.Transaction = transaction;
				command.CommandType = System.Data.CommandType.StoredProcedure;
				command.CommandText = "dbo.insert_trade";
				command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
				command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
				command.Parameters.AddWithValue("@lots", trade.Lots);
				command.Parameters.AddWithValue("@price", trade.Price);

				command.ExecuteNonQuery();
			}

			transaction.Commit();
		}
		connection.Close();
	}

	LogMessage("INFO: {0} trades processed", trades.Count());
}

Now if you compare this with the previous implementation, it is a clear improvement. However, what we have really achieved is readability. This new code is in no way more adaptable than the previous code: you still need to change the TradeProcessor class under any of the previously mentioned circumstances. To achieve adaptability, you need abstraction.

Refactoring for abstraction

In this step we will introduce several abstractions that will allow us to handle any change request for this class. The next task is to split each responsibility into different classes and place them behind interfaces.

public class TradeProcessor
{
    public TradeProcessor(ITradeDataProvider tradeDataProvider, ITradeParser tradeParser, ITradeStorage tradeStorage)
    {
        this.tradeDataProvider = tradeDataProvider;
        this.tradeParser = tradeParser;
        this.tradeStorage = tradeStorage;
    }

    public void ProcessTrades()
    {
        var lines = tradeDataProvider.GetTradeData();
        var trades = tradeParser.Parse(lines);
        tradeStorage.Persist(trades);
    }

    private readonly ITradeDataProvider tradeDataProvider;
    private readonly ITradeParser tradeParser;
    private readonly ITradeStorage tradeStorage;
}

The TradeProcessor class now looks significantly different from the previous implementation. It no longer contains the implementation details for the whole process but instead contains the blueprint for the process. This class models the process of transferring trade data from one format to another. This is its only responsibility, its only concern, and the only reason that this class should change. If the process itself changes, this class will change to reflect it. But if you decide you no longer want to retrieve data from a Stream, log on to the console, or store the trades in a database, this class remains as is.

using System.Collections.Generic;
using System.IO;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class StreamTradeDataProvider : ITradeDataProvider
    {
        public StreamTradeDataProvider(Stream stream)
        {
            this.stream = stream;
        }

        public IEnumerable<string> GetTradeData()
        {
            var tradeData = new List<string>();
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    tradeData.Add(line);
                }
            }
            return tradeData;
        }

        private readonly Stream stream;
    }
}
using System.Collections.Generic;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeParser : ITradeParser
    {
        private readonly ITradeValidator tradeValidator;
        private readonly ITradeMapper tradeMapper;

        public SimpleTradeParser(ITradeValidator tradeValidator, ITradeMapper tradeMapper)
        {
            this.tradeValidator = tradeValidator;
            this.tradeMapper = tradeMapper;
        }

        public IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData)
        {
            var trades = new List<TradeRecord>();
            var lineCount = 1;
            foreach (var line in tradeData)
            {
                var fields = line.Split(new char[] { ',' });

                if (!tradeValidator.Validate(fields))
                {
                    continue;
                }

                var trade = tradeMapper.Map(fields);

                trades.Add(trade);

                lineCount++;
            }

            return trades;
        }
    }
}
using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeValidator : ITradeValidator
    {
        private readonly ILogger logger;

        public SimpleTradeValidator(ILogger logger)
        {
            this.logger = logger;
        }

        public bool Validate(string[] tradeData)
        {
            if (tradeData.Length != 3)
            {
                logger.LogWarning("Line malformed. Only {0} field(s) found.", tradeData.Length);
                return false;
            }

            if (tradeData[0].Length != 6)
            {
                logger.LogWarning("Trade currencies malformed: '{0}'", tradeData[0]);
                return false;
            }

            int tradeAmount;
            if (!int.TryParse(tradeData[1], out tradeAmount))
            {
                logger.LogWarning("Trade not a valid integer: '{0}'", tradeData[1]);
                return false;
            }

            decimal tradePrice;
            if (!decimal.TryParse(tradeData[2], out tradePrice))
            {
                logger.LogWarning("Trade price not a valid decimal: '{0}'", tradeData[2]);
                return false;
            }

            return true;
        }
    }
}
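The SingleResponsibilityPrinciple.Contracts namespace referenced above is not listed in the post. Inferred from the calls the classes make, the contracts might look like the following sketch (the actual declarations in the book may differ slightly; a minimal TradeRecord is included so the snippet is self-contained):

```csharp
using System.Collections.Generic;

namespace SingleResponsibilityPrinciple
{
    // Minimal record type, mirroring the fields used earlier in the post.
    public class TradeRecord
    {
        public string SourceCurrency;
        public string DestinationCurrency;
        public float Lots;
        public decimal Price;
    }
}

namespace SingleResponsibilityPrinciple.Contracts
{
    using SingleResponsibilityPrinciple;

    // Each interface captures exactly one of the original responsibilities.
    public interface ITradeDataProvider
    {
        IEnumerable<string> GetTradeData();
    }

    public interface ITradeParser
    {
        IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData);
    }

    public interface ITradeValidator
    {
        bool Validate(string[] tradeData);
    }

    public interface ITradeMapper
    {
        TradeRecord Map(string[] fields);
    }

    public interface ITradeStorage
    {
        void Persist(IEnumerable<TradeRecord> trades);
    }

    public interface ILogger
    {
        void LogWarning(string message, params object[] args);
    }
}
```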

Now, if you refer back to the list of circumstances, this new version allows you to implement each one without touching the existing classes.

Examples

Scenario 1: Instead of a Stream, your business team asks you to read data from a web service.

Solution: Create a new implementation of ITradeDataProvider.

Scenario 2: A new field is added to the data format.

Solution: Change the implementations of ITradeValidator, ITradeMapper and ITradeStorage.

Scenario 3: The validation rules change.

Solution: Edit the ITradeValidator implementation.

Scenario 4: Your architect asks you to use a document database instead of a relational database.

Solution: Create a new implementation of ITradeStorage.

Conclusion

I hope this post clears your doubts regarding the SRP and convinces you that, by combining abstractions via interfaces with continuous refactoring, you can make your code more adaptive while adhering to the Single Responsibility Principle.

Reference

Adaptive Code via C# – Gary McLean Hall

Process trillions of events per day using C#

Let’s be real: processing trillions of events per day is challenging in any framework or language. The fact that you can do it using the language you know and love can be really tempting.

In my previous job, I worked on a project for handling thousands of business events. In addition to storing the events, we wanted the ability to search them and build analytics on top of them, which can be very challenging, especially when you want to scale out to millions or billions of events per day.

What is Trill?

Trill is a high-performance, one-pass, in-memory streaming analytics engine. It can handle both real-time and offline data and is based on a temporal data and query model. Trill can be used as a streaming engine or a lightweight in-memory relational engine, and as a progressive query processor for early query results on partial data.

Internally, Trill has been used by developers working on Azure Stream Analytics, Bing Ads and even Halo.

So seeing this go open source is really incredible!!

How to get started?

Trill is a single-node engine library: any .NET application, service or platform can easily use Trill and start processing queries.

Let’s see some code

IStreamable<Empty, SensorReading> inputStream;

This is the primary interface for creating streamable operations.

Here is a sample for creating an input stream:

private static IObservable<SensorReading> SimulateLiveData()
{
    return ToObservableInterval(HistoricData, TimeSpan.FromMilliseconds(1000));
}

private static IObservable<T> ToObservableInterval<T>(IEnumerable<T> source, TimeSpan period)
{
    return Observable.Using(
        source.GetEnumerator,
        it => Observable.Generate(
            default(object),
            _ => it.MoveNext(),
            _ => _,
            _ =>
            {
                Console.WriteLine("Input {0}", it.Current);
                return it.Current;
            },
            _ => period));
}

private static IStreamable<Empty, SensorReading> CreateStream(bool isRealTime)
{
    if (isRealTime)
    {
        return SimulateLiveData()
            .Select(r => StreamEvent.CreateInterval(r.Time, r.Time + 1, r))
            .ToStreamable();
    }

    return HistoricData
        .ToObservable()
        .Select(r => StreamEvent.CreateInterval(r.Time, r.Time + 1, r))
        .ToStreamable();
}

Now that we have a stream of events, let’s add some logic to validate them. We need a query that detects when a threshold is crossed upwards.

// The query detects when a threshold is crossed upwards.
const int threshold = 42;

var crossedThreshold = inputStream.Multicast(
    input =>
    {
        // Alter all events 1 sec into the future.
        var alteredForward = input.AlterEventLifetime(s => s + 1, 1);

        // Compare each event that occurs at input with the previous event.
        // Note: this works for strictly ordered, strictly regular (e.g. 1 sec) streams.
        var filteredInputStream = input.Where(s => s.Value > threshold);
        var filteredAlteredStream = alteredForward.Where(s => s.Value < threshold);
        return filteredInputStream.Join(
            filteredAlteredStream,
            (evt, prev) => new
            {
                evt.Time,
                Low = prev.Value,
                High = evt.Value
            });
    });

That’s it. Now you can just subscribe to the crossedThreshold stream and print the value whenever the event occurs.

When you run this, you can see that each upward threshold crossing is captured and printed.
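One way to subscribe is to turn the streamable back into an Rx observable. The following is only a sketch, assuming the Microsoft.StreamProcessing (Trill) and System.Reactive packages and the anonymous result type produced by the query above:

```csharp
// Sketch: convert the IStreamable result back to an IObservable and print
// the data events, skipping Trill's punctuation events.
crossedThreshold
    .ToStreamEventObservable()
    .Where(e => e.IsData)
    .ForEachAsync(e => Console.WriteLine(
        "Crossed upwards at {0}: {1} -> {2}",
        e.Payload.Time, e.Payload.Low, e.Payload.High))
    .Wait();
```

This blocks until the input stream completes, which is fine for the offline (historic) mode shown earlier.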

Conclusion

The best part about Trill is that it’s just a library. It runs within a process on any computer, but it can spawn multiple threads for parallel processing if configured to do so. To span multiple nodes, you can use Orleans or Azure Stream Analytics.

gRPC using C#: a fresh new alternative to build APIs

At this point, most of you might have already come across the term “gRPC”. gRPC is a modern, open-source, high-performance RPC framework that can run in any environment, which makes it suitable for building microservices.

Personally, I’ve been intrigued by gRPC and its potential to completely change how we build APIs today.

“Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”

Stewart Brand, Writer

REST

REST is by far the most popular way of building web APIs. It is easy to understand, and the fact that it uses existing HTTP infrastructure has made it easy for developers to build APIs using the high-quality HTTP implementations available in every language.

But one of the biggest problems with REST is that there is no formal API contract. REST is not a silver bullet; sometimes you want an RPC-style service for operations that are too difficult to model as resources.

What’s gRPC

gRPC is a free and open-source framework originally developed by Google. gRPC is part of the Cloud Native Computing Foundation, like Kubernetes, for example.

At a high level, it allows you to define REQUEST and RESPONSE messages for an RPC and handles all the rest for you. On top of that, it’s modern, fast and efficient, built on top of HTTP/2 with low latency, supports streaming, is language independent, and makes it super easy to plug in authentication, load balancing, logging and monitoring.

What’s an RPC?

An RPC is a Remote Procedure Call. In your CLIENT code, it looks like you’re just calling a function directly on the SERVER. RPC is not a new concept; CORBA had it before. With gRPC, it’s implemented very cleanly and solves a lot of problems.

If you want to learn more about the internals of the gRPC please visit grpc.io.

Okay, that’s all fine, but which languages support gRPC? At the time of writing, it is supported in about 11 languages, covering most of the popular ones like Java, Python, JavaScript and, of course, C#.

gRPC using C#

gRPC uses a contract-first approach to API development. Services and messages are defined in *.proto files.

Let’s get started by creating a new project in Visual Studio.

Once you have created the project, expand Solution Explorer and explore the solution structure. It doesn’t look that different from a regular ASP.NET Core service, except for a few items.

In Solution Explorer, you can see that the packages section contains a reference to the Grpc.AspNetCore package.

We also have a greet.proto file. These proto files are used to define services and messages:

syntax = "proto3";

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

Wait a sec! Why does that look familiar? If you have worked with WSDL, you will see this and realize it is not that different. In fact, you can think of gRPC as a better version of SOAP.

.NET types for services, clients and messages are automatically generated by including *.proto files in a project.
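For the service project, this code generation is wired up through the .csproj. The gRPC template includes a <Protobuf> item with GrpcServices set to Server (the client-side equivalent appears later in this post):

```xml
<ItemGroup>
  <Protobuf Include="Protos\greet.proto" GrpcServices="Server" />
</ItemGroup>
```

The Grpc.Tools package reads this item and generates the base classes and message types at build time.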

gRPC services on ASP.NET Core

gRPC services can be hosted on ASP.NET Core. Services have full integration with popular ASP.NET Core features such as logging, dependency injection (DI), authentication and authorization.

The gRPC service project template provides a starter service:

public class GreeterService : Greeter.GreeterBase
{
    private readonly ILogger<GreeterService> _logger;

    public GreeterService(ILogger<GreeterService> logger)
    {
        _logger = logger;
    }

    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply
        {
            Message = "Hello " + request.Name
        });
    }
}

GreeterService inherits from the GreeterBase type, which is generated from the Greeter service in the *.proto file. The service is made accessible to clients in Startup.cs:

app.UseEndpoints(endpoints =>
{
    endpoints.MapGrpcService<GreeterService>();
});
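Endpoint mapping alone is not enough; the template also registers gRPC in the DI container in ConfigureServices (shown here as it appears in the default ASP.NET Core gRPC template):

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Adds the runtime services required by gRPC endpoints to the container.
    services.AddGrpc();
}
```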

gRPC Client

To consume the gRPC service that Visual Studio created by default, let’s create a .NET Core console application.

Once you created the .Net Core Console application. Install the following packages

Install-Package Grpc.Net.Client
Install-Package Google.Protobuf
Install-Package Grpc.Tools

Add greet.proto

Create a Protos folder in the gRPC client project

Copy the Protos\greet.proto file from the gRPC Greeter service to the gRPC client project.

Edit the client project *.csproj file

Add an item group with a <Protobuf> element that refers to the greet.proto file:

<ItemGroup>
  <Protobuf Include="Protos\greet.proto" GrpcServices="Client" />
</ItemGroup>

Create the greeter client

Build the project to create the types in the namespace. The types are generated automatically by the build process.

Update the gRPC client Program.cs file with the following code:

using Grpc.Net.Client;
using System;
using System.Threading.Tasks;

namespace gRPCHelloWorld.Client
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // The port number(5001) must match the port of the gRPC server.
            var channel = GrpcChannel.ForAddress("https://localhost:5001");
            var client = new Greeter.GreeterClient(channel);
            var reply = await client.SayHelloAsync(
                              new HelloRequest { Name = "GreeterClient" });
            Console.WriteLine("Greeting: " + reply.Message);
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }
    }
}

Program.cs contains the entry point and logic for the gRPC client.

The greeter client is created by:

  • Creating a GrpcChannel with the address of the gRPC service.
  • Using that channel to construct the strongly typed GreeterClient.

Test the gRPC client

  • In the Greeter service project, press Ctrl+F5 to start the server without the debugger.
  • In the Greeter client project, press Ctrl+F5 to start the client without the debugger.

The client sends a greeting to the service with a message containing its name GreeterClient. The service sends the message “Hello GreeterClient” as a response. The “Hello GreeterClient” response is displayed in the command prompt:

Greeting: Hello GreeterClient
Press any key to exit...

Conclusion

gRPC has a lot of potential to become a de facto standard for building web APIs in the near future.

What is this new .NET standard thingy?

If you were a .NET developer five years ago, life was simple. If someone asked what your domain was, you would reply, "I'm a .NET developer." That's not the case anymore. Now we have several flavors of .NET, and it's kind of confusing for new developers coming into the .NET landscape to pick one.

Background

Initially we had only the .NET Framework, a fairly large framework that lets us create mobile apps, desktop apps and web apps that run only on Windows PCs, devices and servers.

dotnet standard 001

Then around 2011 we had Xamarin, which brought the power and productivity of .NET to iOS and Android by reusing skills and code while giving access to native APIs and performance. But it went on in a different track until Microsoft acquired Xamarin in 2016.

dotnet standard 002

And then we have the new .NET Core, which gives you a blazing fast and modular platform for creating server applications that run on Windows, Linux and Mac.

dotnet standard 003

So we have all these platforms and different app models like WPF, ASP.NET, iOS, Android etc. That kind of makes sense, since some run on the web, some run on mobile and so on. But having a different Base Class Library (BCL) on each platform doesn't make any sense at all for us developers, since we use these base libraries to develop all our applications.

dotnet standard  004.PNG

So what are all the problems with this

  • Difficult to reuse skills
    • Need to master each platform's base class library
  • Difficult to reuse code
    • Need to target a fairly small common denominator
  • Difficult to innovate
    • Need an implementation on each platform

And this is where .NET Standard comes in. The idea is that there are places where it makes sense to have different app models running in different places doing different things; however, the base class library should be common. So the idea is to take all the stuff that's there in each of these frameworks and put it into one single container called .NET Standard.

dotnet standard 005

What exactly is a .NET Standard?

.NET Standard is just a specification. It's not something that you would install alongside the other .NET frameworks. It's basically a set of APIs that all .NET platforms have to implement.

To put this in better context, let me explain it from a web developer's perspective. Take the HTML specification: it's designed by the W3C, and all browsers like Chrome, Edge, Firefox etc. have to implement it.

The current version of .NET Standard is 2.0, and it supports over 20K more APIs than .NET Standard 1.x. Already around 70% of NuGet packages are .NET Standard compliant.
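Targeting .NET Standard from a class library is a one-line change in the .csproj (assuming an SDK-style project file):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```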

If you already have a .NET library published on NuGet and you haven't migrated to .NET Standard, don't worry! Your library will still be accessible to developers who are targeting .NET Standard, and you don't have to recompile your DLL or anything.

Conclusion

For all your new projects, I think you should start developing against .NET Standard while Microsoft works on unifying the different BCL implementations in the background.

Please stay tuned for more .NET Core stuff. You can also check out my article on how to create a .NET Core console application in 3 steps.

How to customize Swagger UI in ASP.NET Web API

Documentation is one of the most important things for any web API. It is going to be the single point of reference for the developers who consume your APIs. An API with bad documentation is never going to get popular among developers. And if creating documentation is a tedious process, maintaining it is a completely different nightmare. So here comes Swagger as the savior for the above problems.

What is swagger?

Swagger is one of the most popular pieces of API tooling. It helps developers design, build and document their APIs using a Swagger definition file.

And that’s not all…

Once you have described your API in a Swagger definition, you've opened a treasure chest full of Swagger-based tools, including client generator tools which you can use to generate consumer/client applications for a variety of platforms.

Cool. But how can I use it in my ASP.NET Web API? No worries! It's as simple as adding a NuGet package called Swashbuckle to your project.

Create new project

  • Open Visual Studio. If you don't have it, you should go and download it now; after all, the Visual Studio Community edition is free of cost.
  • Once it's open, click File->New->New Project

    swaggerui_001_create_project

  • Select ASP.NET Web Application (.NET Framework) and click Ok

    swaggerui002_create_project

  • Then Select Web API Template and click Ok.

Import Swashbuckle

  • Open the Nuget Package manager for the project
  • Browse for the package "Swashbuckle" and install it. If you are using ASP.NET Core, you need to install Swashbuckle.AspNetCore instead
  • Start the Web API
  • Now if you browse to <your-root-url>/swagger you should see the swagger documentation like below

Customizing the UI

The UI isn't bad, but sometimes you may want to customize things like including your company name, changing fonts & colors, etc. The good thing about Swashbuckle is that it has a few extension points we can use to customize the look and feel.

If you open the SwaggerConfig.cs file under the App_Start folder, you can see that all the Swagger-related configuration lives there.

Things you can customize in Swashbuckle

  • Stylesheet
  • Javascript
  • HTML
  • Submit methods, Boolean values, etc

Customizing index.html

You can inject your own index.html using this template. This lets you customize things like adding your company name or removing things like the api_key input and the Explore button.

  • To do this, first you need to add an index.html to your project under Content.

    swaggerui005_added index file

  • In Solution Explorer, right click the file and open its properties window. Change the “Build Action” to “Embedded Resource”

    swaggerui006_embedded resource

Once this is done, open SwaggerConfig.cs and search for EnableSwaggerUi; under that, uncomment the line that starts with c.CustomAsset("index", and change the parameters based on your project. For example, if your project name is ECommerce and you have placed index.html under the Content folder, the line should look like the below:

c.CustomAsset("index", thisAssembly, "ECommerce.Content.index.html");

swaggerui004_enable swagger ui

To customize the styles, you can visit this GitHub project, grab any CSS file based on your liking, and add it to the project the same way you did with index.html.

Once the CSS is added and marked as an embedded resource, edit SwaggerConfig.cs again like below:

c.InjectStylesheet(thisAssembly, "ECommerce.Content.material-theme.css");

Once you have done this, your swagger UI should look like below:

swaggerui007_final view

Please visit the official GitHub repo to learn more about Swashbuckle.

Happy coding!!


Common mistakes that C# noobs do

Let's face it! All of us, when we initially started programming, made some silly mistakes without understanding the language or a language feature properly. Most of us C# developers learn these mistakes the hard way, and it's part of the journey of any beginner-level developer. But at the same time, it doesn't mean everyone has to learn the hard way. As the saying goes, "standing on the shoulders of giants": we see farther than our predecessors, not because we have a better understanding, but because we are lifted up and borne aloft on their experience.

As developers, along with developing software we should also develop our skills, to better ourselves and to avoid mistakes that we made in the past. So here I have compiled some of the common mistakes that beginner-level C# developers make, to help you avoid them in the future. Please feel free to comment below with some of your own experiences on the topic.

Use interfaces properly

I have seen several beginners, who are not just new to C# but to programming itself, do this: they will declare an interface and derive a class from it, but they won't use the interface type during instantiation. Don't understand? Let me explain with code!

Let's see a typical interface declaration:

public interface IMonitor
{
    void Configure();
    void Start();
    void Stop();
}

public class ActivityMonitor : IMonitor
{
    public void Configure()
    {
        //some code
    }

    public void Start()
    {
        //some code
    }

    public void Stop()
    {
        //some code
    }
}

Now comes the important part of instantiating an object

var wrongUse = new ActivityMonitor();
IMonitor correctUse = new ActivityMonitor();

In the above sample both statements are valid, but there is one key difference. The first statement declares the variable with the concrete ActivityMonitor type, which works, but if you later try to assign a different implementation of IMonitor to that variable, it will fail at compile time.

var wrongUse = new ActivityMonitor();
wrongUse= new OtherMonitor(); // error
IMonitor correctUse = new ActivityMonitor();
correctUse = new OtherMonitor(); // works

This is because, if you want polymorphism, you have to use the interface type when you declare the variable.

Know your defaults

In C#, value types cannot be null, be it an int, a DateTime, or any other type that is a struct. On the other hand, a reference type can always be null.

Value types can be made null only if you declare them as a nullable (?) type explicitly. For example:

static List<int> alist;
static DateTime startTime;

private static void Main(string[] args)
{
    Console.WriteLine(alist == null); // Prints True
    Console.WriteLine(startTime == null); // Prints False
}

If you want to know the default value of some type, use the default operator, e.g. default(int).
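A quick sketch of what default(T) yields for a few types (the class and method names here are just illustrative):

```csharp
using System;

class DefaultsDemo
{
    // default(T) yields zero for numeric types, DateTime.MinValue for
    // DateTime, and null for reference and nullable value types.
    public static int DefaultInt() => default(int);
    public static DateTime DefaultDateTime() => default(DateTime);
    public static string DefaultString() => default(string);
    public static int? DefaultNullableInt() => default(int?);
}
```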

Reference types vs Value Types

If you don't know whether the type you are using is a value type or a reference type, you will constantly run into issues. When you assign one value-type variable to another, it copies the value. Reference types copy only the reference, so any change made through the new variable is visible through both variables.

Point point1 = new Point(10, 20);
Point point2 = point1;
point2.X = 50;
Console.WriteLine(point1.X); // 10 (does this surprise you?)
Console.WriteLine(point2.X); // 50

Pen pen1 = new Pen(Color.Black);
Pen pen2 = pen1;
pen2.Color = Color.Blue;
Console.WriteLine(pen1.Color); // Blue (or does this surprise you?)
Console.WriteLine(pen2.Color); // Blue

The answer: always look at the type of the variable. If it is a struct, then it's a value type; if it's a class, then it's a reference type.

Start Loving LINQ

C# 3.0 was introduced almost a decade ago, in 2007. One of the most important features of that release is LINQ (Language Integrated Query). It has fundamentally changed how we manipulate collections in C# with its SQL-like syntax. But for some reason a lot of beginners find it difficult to get a grasp of LINQ because of its unfamiliar syntax.

A lot of programmers also think that its only use is in code that queries a database. Even though database querying is one of the primary uses of LINQ, it can work with any collection that implements IEnumerable. So for example, if you had an array of Activities, instead of writing

var sum = 0;
foreach (var activity in activities)
{
    if (activity.IsRun)
        sum += activity.Count;
}

you could just write

var sum = (from activity in activities
           where activity.IsRun
           select activity.Count).Sum();

I know this is a simple example, but with the power of LINQ you can easily replace dozens of statements in an iterative loop with a single LINQ statement. There are also a few things to be aware of: there can be performance trade-offs in certain scenarios. I would suggest using LINQ when you can predict the amount of data you will iterate through, and always do a performance comparison between a normal for loop and LINQ.
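The same query can also be written in LINQ's method syntax, which many developers find easier to compose (the Activity type below is a minimal stand-in for the one in the example):

```csharp
using System.Linq;

class Activity
{
    public bool IsRun { get; set; }
    public int Count { get; set; }
}

class LinqDemo
{
    // Equivalent to the query-syntax version: filter, then sum.
    public static int SumOfRuns(Activity[] activities) =>
        activities.Where(a => a.IsRun).Sum(a => a.Count);
}
```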

Stop nesting exceptions

I'm guilty of doing this in the initial part of my career in the name of exception handling. Beginners tend to add a try-catch to every single method that they write, and most of these catch blocks just rethrow. In some cases they may add a log there too. But if all you are going to do is throw back to the caller, why catch it in the first place?

public class DeliveryComponent
{
    public void Order()
    {
        try
        {
            Pay();
        }
        catch (Exception ex)
        {
        }
    }

    private void Pay()
    {
        try
        {
            DoTransaction();
        }
        catch (Exception ex)
        {
            throw;
        }
    }

    private void DoTransaction()
    {
        try
        {
            //some code
        }
        catch (Exception ex)
        {
            throw;
        }
    }
}

More than that, this adds a performance overhead to the program. Most of the time it is enough to put the try-catch at the top level of the call chain, like below.

You will want to handle an exception at a lower level only if you want to do something with it explicitly, like retrying or logging.

public class DeliveryComponent
{
    public void Order()
    {
        try
        {
            Pay();
        }
        catch (Exception)
        {
            // ignored
        }
    }

    private void Pay()
    {
        DoTransaction();
    }

    private void DoTransaction()
    {
        //some code
    }
}

Use the using statement whenever possible

Resource management is a huge concern in programming when you are using resources like SQL connections, file streams, network sockets, etc. C# provides a convenient way to have the Dispose method called whenever you are finished with an object, helping you avoid resource leaks.

Inefficient way

var con = new SqlConnection("");
con.Open();
//some operation
con.Close();

Efficient way

using (var con = new SqlConnection(""))
{
    con.Open();
}
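Under the hood, using expands to a try/finally that calls Dispose. A minimal sketch with a toy disposable type (illustrative, not a real BCL type) shows that Dispose runs even when control leaves the block:

```csharp
using System;

// A toy disposable resource to observe when Dispose is called.
class Resource : IDisposable
{
    public bool Disposed { get; private set; }
    public void Dispose() => Disposed = true;
}

class UsingDemo
{
    public static Resource UseAndReturn()
    {
        var resource = new Resource();
        using (resource)
        {
            // work with the resource here
        }
        // the using block has already called Dispose for us
        return resource;
    }
}
```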

Use constraints on your generics

Generics are one of the coolest features of C#. Some beginner-level devs use generics but have no idea how to put constraints on them so that the generic type will not be misused.

For example, consider the code below:

public interface IActivityRepository<T>
{
    bool Insert(T activity);
    bool Update(T activity);
    T Get(int id);
    bool Delete(int id);
}

From the code you can see that IActivityRepository accepts any type and tries to insert it into the database. It's obvious that IActivityRepository expects a reference type, but some developer may try to do the following:

public class ValueActivityRepository : IActivityRepository<int>
{
    public bool Insert(int activity)
    {
        //some code
    }

    public bool Update(int activity)
    {
        //some code
    }

    public int Get(int id)
    {
        //some code
    }

    public bool Delete(int id)
    {
        //some code
    }
}

As you can see, this is not the intended purpose of that interface. So you can add constraints to make sure the right kind of type is substituted for T.

public interface IActivityRepository<T>
    where T : class, IActivity, new()
{
    bool Insert(T activity);
    bool Update(T activity);
    T Get(int id);
    bool Delete(int id);
}

The above code makes sure that the generic type T is a reference type that implements IActivity and has a public parameterless constructor.
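With those constraints in place, a conforming T looks like the sketch below (the Activity class and its members are illustrative, not from the original post). IActivityRepository<int> would now be rejected at compile time, because int is a value type and violates the class constraint:

```csharp
public interface IActivity
{
    int Id { get; }
}

// Satisfies all three constraints: it is a class, implements
// IActivity, and has a public parameterless constructor.
public class Activity : IActivity
{
    public int Id { get; set; }
}

public interface IActivityRepository<T>
    where T : class, IActivity, new()
{
    bool Insert(T activity);
}

public class ActivityRepository : IActivityRepository<Activity>
{
    // A stand-in for real persistence logic.
    public bool Insert(Activity activity) => activity != null;
}
```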

Exceptions: Let’s not push it under the rug. Be Explicit

C# is a statically typed language. This allows the compiler to pinpoint many errors during compilation itself, where a faulty type conversion is detected much more quickly. When you are doing explicit type conversion in C#, you have two ways to go: one will throw an exception on error and one will not.

Let's see them:

object account = new CurrentAccount();
//METHOD 1
SavingsAccount account2 =(SavingsAccount)account;

//METHOD 2
SavingsAccount account3 = account as SavingsAccount;

In the above example, the first method will throw an InvalidCastException immediately. The second will just assign null to account3, which can then be consumed by some other method and throw a NullReferenceException that surfaces much later, making the bug difficult to track down.
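Since C# 7 there is a third option: pattern matching with is, which combines the check and the cast so you get neither an exception nor a silently null variable (the account types below mirror the example above; the InterestRate property is just illustrative):

```csharp
class Account { }
class CurrentAccount : Account { }
class SavingsAccount : Account
{
    public decimal InterestRate { get; set; } = 0.05m;
}

class CastDemo
{
    public static decimal RateOrZero(object account)
    {
        // The cast only happens, and `savings` is only assigned,
        // when the runtime type actually matches.
        if (account is SavingsAccount savings)
            return savings.InterestRate;
        return 0m;
    }
}
```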

Conclusion

C# is a very powerful and flexible language that supports multiple programming paradigms. With any tool or language as powerful as C#, there are always going to be caveats. The best thing we can do is learn from our mistakes, and avoiding these common problems will make you a better programmer.

Please comment below if you want to add any other common mistakes that C# devs make, and also check out my other articles on C#.

A look at C# dynamic type

Even though the dynamic keyword was introduced in C# 4.0, I have never used it so far in any of my code. I have seen some of my colleagues use it, but I was always able to point out how we could do the same thing with static types.

I always thought dynamic typing is a bit like unsafe code: many developers will have no need for it, or will use it very rarely. As far as I have researched, it looks like it gives a huge productivity boost to developers dealing with Microsoft Office, either by making their existing code simpler or by allowing radically different approaches to their problems. I personally have no experience working with the Microsoft Office APIs, or COM in general. But you can't predict what novel uses the community may come up with in the future, so it's always good to learn at least a little bit about any technology, just in case. With that, let's dive in to the dynamic type.

What is dynamic type?

In C#, dynamic is a type which helps us bypass compile-time type checking; instead, types are resolved at run time. You declare a dynamic variable just like you would declare any other variable in C#; there is no special syntax. Below are the main rules of dynamic:

  • Any CLR type can be converted to dynamic implicitly
  • Any expression which results in a dynamic type can be converted to a CLR type implicitly
  • Expressions that use a value of type dynamic are usually evaluated dynamically
  • The static type of a dynamically evaluated expression is usually deemed to be dynamic

The example below demonstrates all the above points:

// CLR type converts to dynamic type
// using implicit conversion
dynamic items = new List<string>
{
    "First",
    "Second",
    "Third"
};
dynamic valueToAdd = "!";
foreach (var item in items)
{
    // dynamic type converts to CLR type
    // implicitly without any cast
    string result = item + valueToAdd;
    WriteLine(result);
}
// Outputs
// =======
// First!
// Second!
// Third!
Now what would happen if you wanted to add an integer instead of a string to each element of the List<string>?

// CLR type converts to dynamic type
// using implicit conversion
dynamic items = new List<string>
{
    "First",
    "Second",
    "Third"
};
dynamic valueToAdd = 2;
foreach (var item in items)
{
    // dynamic type converts to CLR type
    // implicitly without any cast
    string result = item + valueToAdd; // Concatenation
    WriteLine(result);
}
// Outputs
// =======
// First2
// Second2
// Third2

If you were using static typing you would have needed to change the declaration of valueToAdd from string to int. What if you changed the items to be integers as well? Let’s try that one simple change, as shown in the following listing.

// CLR type converts to dynamic type
// using implicit conversion
dynamic items = new List<int> { 1, 2, 3 };
dynamic valueToAdd = 2;
foreach (var item in items)
{
    // dynamic type converts to CLR type
    // implicitly without any cast
    string result = item + valueToAdd;
    WriteLine(result);
}

Oops! You're still trying to convert the result of the addition to a string. The only conversions that are allowed are the same ones that C# permits normally, and there's no implicit conversion from int to string. The result is a runtime exception.

RuntimeBinderException is the new NullReferenceException

If you are going to use the dynamic type a lot, you are bound to come across RuntimeBinderException. In the previous example you can fix the error by changing the type of result to dynamic, so that the conversion isn't required.

You can write the same example with all the variables as dynamic like below

dynamic items = new List<int> { 1, 2, 3 };
dynamic valueToAdd = 2;
foreach (var item in items)
{
    WriteLine(item + valueToAdd);
}

Conclusion

Several open source projects use dynamic typing to great effect, such as Massive by Rob Connery, Dapper, and Json.NET.

Other areas where dynamic types are helpful include working with Excel, calling into Python, and using normal managed .NET types in a more flexible way.

Reference

C# In Depth by Jon Skeet

My thoughts on using regions in C#

Coding practices are such a divisive topic in any software team. Each of us will have our own preferences. But when we are in a team we need to bend our preferences a little, to accommodate the generally accepted coding practices of that team.

Not everyone will agree on a particular coding practice, but it's important to discuss with your team and decide on a common approach to protect team harmony.

Still, there are some coding preferences people may feel strongly about. For me, the use of #region is one of those things. Regions are a way to make your code collapsible in the editor: basically a named hint you place in C# or VB.NET code to set a folding point.

The people who defend regions say, "It helps me organize the code." That answer drives me crazy. Really?? It helps you organize??

Regions don't organize your code; they help you hide disorganized code. You can think of regions as putting a Band-Aid on a cancerous wound. Of course the Band-Aid may help hide the wound, but it is not going to cure the cancer. Classes, namespaces and libraries organize the code, not these pointless little regions. In my view, regions can cover up code smells like long methods, god objects, etc.

Below are the downsides of regions:

  • Regions are just glorified comments. They have zero meaning to the compiler
  • Bad code is usually hidden under regions. People tend to abuse the feature, mostly to hide their own shortcomings
  • They hurt readability. By default, VS collapses the code under a region. Of course you can change that behavior in the options, but then comes the question: why put the region there in the first place if you don't want the folding?
  • The Single Responsibility Principle suggests that any unit should have one task and one task only. The fact that you need regions indicates that the module has multiple responsibilities

Is there a good use for regions?

No. The only reason the feature is still there is legacy usage; it would be impossible to remove it without breaking existing code bases.

The fact that the language or the IDE supports a feature doesn’t mean that it should be used daily.