How to use Redis as your primary database

Recently, I got a chance to work with Redis and realized that Redis is not just a caching solution; it can serve as your primary database. Traditional databases store their data on disk, even though most of them keep an embedded cache in RAM to optimize query performance. Most of the time we end up adding a caching layer, either in memory or Redis, to get sub-millisecond performance.

It’s easy to conceptualize your tables as Redis data structures. For example, a Hash can serve as your table, and a Sorted Set can be used to build secondary indexes. Let’s walk through the basic database operations in the context of Redis by storing and querying a list of employees.

Inserting data

You can use a hash to store each record of your table, with the key suffixed by an identifier, like employees::1. For each indexed field, also add an entry to a sorted set: names get score 0 so they sort lexicographically, and salaries use the salary itself as the score, with the employee id as the member.

HSET employees::1 name Arivu salary 100000 age 30
ZADD employees::name 0 Arivu:1
ZADD employees::salary 100000 1

HSET employees::2 name Uma salary 300000 age 31
ZADD employees::name 0 Uma:2
ZADD employees::salary 300000 2

HSET employees::3 name Jane salary 100000 age 25
ZADD employees::name 0 Jane:3
ZADD employees::salary 100000 3

HSET employees::4 name Zakir salary 150000 age 28
ZADD employees::name 0 Zakir:4
ZADD employees::salary 150000 4

The same commands also work for updating data. This creates an employees table with 4 records while also populating the respective indexes. In this example we are indexing only two fields. Unlike a traditional database, in Redis we have to take care of keeping the indexes up to date ourselves.
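
Since index maintenance is on you, it’s a good idea to wrap a record update and its index updates in a transaction so they stay consistent. A minimal sketch:

MULTI
HSET employees::1 salary 120000
ZADD employees::salary 120000 1
EXEC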

Querying data

If you want to query by the primary key:

HGETALL employees::1

If you want to query by a secondary index, for example salary >= 150000:

ZRANGEBYSCORE employees::salary 150000 +inf
Output
======
1) "4"
2) "2"

Now you can do an HGETALL for each of these ids.

If you want to build more advanced queries with AND/OR logic, I suggest you explore ZINTERSTORE and ZUNIONSTORE.
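
For example, an AND query can be expressed as an intersection. Here is a sketch assuming a hypothetical employees::dept::engineering set that holds the ids matching another condition; ZINTERSTORE treats a plain set as a sorted set whose members have score 1, and WEIGHTS 1 0 keeps the salary as the resulting score:

SADD employees::dept::engineering 1 4
ZINTERSTORE tmp::eng_salary 2 employees::salary employees::dept::engineering WEIGHTS 1 0
ZRANGEBYSCORE tmp::eng_salary 150000 +inf

This returns the ids of engineering employees with salary >= 150000; an OR works the same way with ZUNIONSTORE.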

Sorting data

Once you know the relevant hashes that need to be returned to the client, you can use the SORT command to sort the employee data by some field.

Say your query returned the ids of all four employees.

Store the results in a SET

SADD result 1 2 3 4

Sort the data

SORT result BY employees::*->name ALPHA GET employees::*->name
Output
1) "Arivu"
2) "Jane"
3) "Uma"
4) "Zakir"

Conclusion

As you can see, Redis is definitely capable of serving as a primary database, and with cloud providers like AWS offering managed Redis instances, it’s even easier to use Redis as your primary datastore.

Hope you find this useful.

Understanding SOLID Principles: Interface Segregation & Dependency Injection

Interface Segregation

The interface is one of the key concepts in object-oriented programming. Interfaces represent boundaries between what client code requires and how that requirement is implemented. The interface segregation principle states that interfaces should be small.

Whenever a class implements an interface, every single property, event and method in it needs to be implemented in its entirety. So if we have large interfaces, it does not make sense to expect clients to implement all of their members irrespective of what the client actually needs.

Consider you are creating an interface for the users of a streaming application:

public interface IStreamingUser {
   Stream Play(string videoId);
   Stream Download(string videoId);
}

There are different kinds of streaming users:

  • RegularUser can only watch videos
  • PremiumUser can watch as well as download videos

public class RegularUser : IStreamingUser {
  public Stream Play(string videoId) {
     // actual code to stream video
  }
  public Stream Download(string videoId) {
    throw new NotImplementedException();
  }
}

public class PremiumUser : IStreamingUser {
  public Stream Play(string videoId) {
     // actual code to stream video
  }
  public Stream Download(string videoId) {
    // actual code to download video
  }
}

As you can see from the above example, since the interface is not segregated properly, we had to force the RegularUser class to implement the Download method.

Fixing using interface segregation

interface IPlayable {
 void Play(string id);
}

interface IDownloadable {
 void Download(string id);
}

class RegularUser : IPlayable {
  public void Play(string id) {
     //code
  }
}
class PremiumUser : IPlayable, IDownloadable {
  public void Play(string id) {
     //code
  }
  public void Download(string id) {
     //code
  }
}

Too often, interfaces are large facades behind which huge subsystems are hidden. At a certain critical mass, interfaces lose the adaptability that makes them so fundamental to developing SOLID code.

Dependency Injection

Dependency injection (DI) is a very simple concept with a similarly simple implementation. However, this simplicity belies the importance of the pattern: DI is the glue that ties all the SOLID principles together.

Let’s take a look at the controller class below:

public class TaskListController 
{
    private readonly ITaskService taskService;
    private readonly IObjectMapper mapper;
    public TaskListController()
    {
         this.taskService = new TaskService();
         this.mapper = new AutoMapper();
    }

}

The problems with the above code:

  • Not unit testable
  • Hard dependencies on TaskService and AutoMapper
  • Lack of flexibility in providing alternative service implementations

Improved design

public class TaskListController 
{
    private readonly ITaskService taskService;
    private readonly IObjectMapper mapper;
    public TaskListController(ITaskService taskService, IObjectMapper mapper)
    {
         this.taskService = taskService;
         this.mapper = mapper;
    }

}

Now we have removed all the hard dependencies. We can use a DI framework to define the object graph well in advance, and when unit testing we can mock these constructor arguments.
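
For illustration, here is a minimal sketch of wiring this up with the Microsoft.Extensions.DependencyInjection container; any DI framework follows the same idea, and TaskService and AutoMapper are the concrete types from the earlier example:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Map each abstraction to a concrete implementation once, at the composition root.
services.AddTransient<ITaskService, TaskService>();
services.AddTransient<IObjectMapper, AutoMapper>();
services.AddTransient<TaskListController>();

var provider = services.BuildServiceProvider();

// The container resolves the constructor arguments recursively.
var controller = provider.GetRequiredService<TaskListController>();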

Conclusion

Hope you guys found this series on SOLID principles useful.

Understanding SOLID Principles: Liskov Substitution

This post is a continuation of the SOLID principles series that I have been writing.

Please make sure you read my other posts on the topic.

The Liskov substitution principle (LSP) is a set of rules for creating inheritance hierarchies in which the consumers of these classes can reliably use any class or subclass without breaking their code.

If S is a subtype of T, then objects of type T may be replaced with objects of type S, without breaking the program — Barbara Liskov

Let’s look at some practical examples to understand this further. Consider a base class called ShippingStrategy that is inherited by WorldWideShippingStrategy, both of which contain the following method:

decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)

The only thing that this method shares between ShippingStrategy and WorldWideShippingStrategy is its signature. The implementation could be completely different between these two classes.

Let’s see the base class (ShippingStrategy) implementation:

public decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)
{
    if (packageWeightInKilograms <= 0f)
        throw new Exception($"{nameof(packageWeightInKilograms)} must be positive and non zero");

    // Actual logic

    return default(decimal);
}

As you can see from the above code example the function has a precondition to make sure the packageWeightInKilograms is always positive and non zero. Preconditions are defined as all the conditions necessary for a method to run reliably and without fault.

Now let’s consider the WorldWideShippingStrategy implementation

public decimal CalculateShippingCost(
    float packageWeightInKilograms,
    Size<float> packageDimensionsInInches,
    RegionInfo destination)
{
    if (packageWeightInKilograms <= 0f)
        throw new Exception($"{nameof(packageWeightInKilograms)} must be positive and non zero");

    if (destination == null)
        throw new Exception($"{nameof(destination)} must be provided");

    // Actual logic

    return default(decimal);
}

Since we have added an additional precondition in this implementation, a consumer coding against the base type might assume that they can pass null for the destination parameter; if a WorldWideShippingStrategy is then substituted in, their program would break.

This is exactly the problem that the LSP addresses, using the following rules.

Contract Rules

  • Preconditions cannot be strengthened in a subtype
  • Postconditions cannot be weakened in a subtype
  • Invariants, conditions that must remain true throughout the lifetime of an object, must be maintained in a subtype

Variance Rules

  • There must be contravariance of the method arguments in the subtype.
  • There must be covariance of the return types in the subtype
  • No new exceptions can be thrown by the subtype unless they are part of the existing exception hierarchy
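
A quick C# sketch of the two variance rules; the names here are purely illustrative. IEnumerable<out T> is covariant, so a sequence of a derived type can stand in for a sequence of its base type, while Action<in T> is contravariant, so a handler of the base type can stand in for a handler of the derived type:

using System;
using System.Collections.Generic;

class VarianceDemo
{
    static void Main()
    {
        // Covariance of outputs: a string sequence substitutes for an object sequence.
        IEnumerable<string> names = new List<string> { "Arivu", "Uma" };
        IEnumerable<object> objects = names;

        // Contravariance of inputs: an object handler substitutes for a string handler.
        Action<object> logAny = o => Console.WriteLine(o);
        Action<string> logString = logAny;
        logString("works");
    }
}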

Conclusion

Even though the LSP might appear to be the most complex of the SOLID principles, once you understand the concepts of preconditions, postconditions and variance, it becomes much easier to grasp.

Hope this helps!!

Understanding SOLID Principles: Open/Closed

As beginners, we have all written code that is quite procedural, irrespective of the language we began with. Beginners tend to use classes as storage mechanisms for methods, regardless of whether those methods truly belong together. There is little or no architecture to the code, and there are very few extension points. Any change in requirements results in modifying existing code, which risks regression.

In the previous part we saw the Single Responsibility Principle, which talked about the god object and how you should refactor it for clarity. In this post, let’s look at the Open/Closed Principle.

The name Open/Closed Principle may sound like an oxymoron, but let’s look at the definition from Meyer:

Software entities should be open for extension, but closed for modification

Bertrand Meyer

Open for extension – This means that the behavior of the module can be extended. As the requirements of the application change, we are able to extend the module with new behaviors that satisfy those changes. In other words, we are able to change what the module does.

Closed for modification – Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether in a linkable library, a DLL, or a Java .jar, remains untouched.

Extension Points

Classes that honor the OCP should be open to extension by containing defined extension points where future functionality can hook into the existing code and provide new behaviors.

If you look at the code sample from the Single Responsibility Principle post, the snippet before refactoring is an example of code with no extension points.

If you allow changes to existing code there is a higher chance of regression, and when you change an existing interface it impacts every client of that interface.

We can provide extension points using the following concepts:

  • Virtual Methods
  • Abstract Methods
  • Interface

Virtual Methods

If we mark a member of a class as virtual, it becomes an extension point. This type of extension is handled via inheritance: when the requirements for an existing class change, you can subclass it and, without modifying its source code, change the behavior to satisfy the new requirement, as the sketch below shows.
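
A minimal sketch; the class and the rates are illustrative:

public class ShippingCalculator
{
    // Extension point: subclasses may replace this behavior.
    public virtual decimal Calculate(decimal weightInKg) => weightInKg * 1.5m;
}

// A new requirement is satisfied without touching the existing class.
public class ExpressShippingCalculator : ShippingCalculator
{
    public override decimal Calculate(decimal weightInKg) => base.Calculate(weightInKg) + 10m;
}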

Abstract Methods

Abstract methods are another OOP mechanism for providing extension points. By declaring a member as abstract, you leave the implementation details to the inheriting class. Unlike virtual methods, we are not overriding an existing implementation but delegating the implementation entirely to the subclass.
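
For instance, again with illustrative names:

public abstract class ReportGenerator
{
    // No default implementation: every inheritor must supply its own.
    public abstract string Render();
}

public class HtmlReportGenerator : ReportGenerator
{
    public override string Render() => "<h1>Report</h1>";
}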

Interface inheritance

The final type of extension point is interface inheritance. Here, the client’s dependency on a class is replaced with an interface. Unlike the other two methods, an interface leaves all implementation details to the implementer, which makes it much more flexible.

This also helps keep inheritance hierarchies shallow, with few layers of subclassing.
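
A small sketch of a client depending only on an interface; the names are illustrative:

using System;

public interface IReportRenderer
{
    string Render();
}

// The client depends only on the contract; any implementation can be swapped in.
public class ReportViewer
{
    private readonly IReportRenderer renderer;

    public ReportViewer(IReportRenderer renderer)
    {
        this.renderer = renderer;
    }

    public void Show() => Console.WriteLine(renderer.Render());
}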

Closed for change

Design and document for inheritance or else prohibit it

Joshua Bloch

If you are using inheritance, you must be aware that any non-sealed class can be inherited and extended with new functionality. If we allow this, we must document the class properly to protect and inform the future programmers who extend it.

If you do not expect a class to be extended, it’s better to prohibit extension by using the sealed keyword.

Conclusion

Knowing how to add extension points is not sufficient, however; you also need to know when they are applicable. Identify the parts of the requirements that are likely to change or that are particularly troublesome to implement. Depending on the specific scenario, the code can be rigid or it can be fluid, with myriad extension points.

Reference

Adaptive Code via C# – Gary McLean Hall

Understanding SOLID Principles: Single Responsibility

Agile methodology is not just an alternative to more rigid processes like waterfall, but a reaction to them. The aim of agile is to embrace change as a necessary part of the contract between client and developer.

If your code is not adaptive enough, your process cannot be agile enough

UMAMAHESWARAN

With the sole purpose of agile being adaptability, developers should strive to ensure that their code is maintainable, readable, tested and, more importantly, adaptive to change. SOLID is the acronym for a set of practices that, when implemented together, make code adaptive to change.

Each of these principles is a worthy practice by itself that any software developer would do well to learn. When used in collaboration, these practices give code a completely different structure. Let’s explore the SRP.

Single Responsibility Principle

The single responsibility principle (SRP) instructs developers to write code that has one and only one reason to change. If a class has more than one reason to change, it has more than one responsibility. Classes with more than a single responsibility should be broken down into smaller classes, each of which has only one responsibility and reason to change.

To achieve this, you have to identify classes that have too many responsibilities and use delegation and abstraction to refactor them.

What do I mean by one reason to change? Let’s look at a TradeProcessor example to better explain the problem.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;
using System.Linq;
using static System.Console;

namespace SalesProcessor
{
	public class TradeProcessor
	{
		public void ProcessTrades(Stream stream)
		{
			// read rows
			var lines = new List<string>();
			using (var reader = new StreamReader(stream))
			{
				string line;
				while ((line = reader.ReadLine()) != null)
				{
					lines.Add(line);
				}
			}

			var trades = new List<TradeRecord>();

			var lineCount = 1;
			foreach (var fields in lines.Select(line => line.Split(new[] { ',' })))
			{
				if (fields.Length != 3)
				{
					WriteLine("WARN: Line {0} malformed. Only {1} field(s) found.", lineCount, fields.Length);
					continue;
				}

				if (fields[0].Length != 6)
				{
					WriteLine("WARN: Trade currencies on line {0} malformed: '{1}'", lineCount, fields[0]);
					continue;
				}

				if (!int.TryParse(fields[1], out var tradeAmount))
				{
					WriteLine("WARN: Trade amount on line {0} not a valid integer: '{1}'", lineCount, fields[1]);
					continue;
				}

				if (!decimal.TryParse(fields[2], out var tradePrice))
				{
					WriteLine("WARN: Trade price on line {0} not a valid decimal: '{1}'", lineCount, fields[2]);
					continue;
				}

				var sourceCurrencyCode = fields[0].Substring(0, 3);
				var destinationCurrencyCode = fields[0].Substring(3, 3);

				// calculate values
				var trade = new TradeRecord
				{
					SourceCurrency = sourceCurrencyCode,
					DestinationCurrency = destinationCurrencyCode,
					Lots = tradeAmount / LotSize,
					Price = tradePrice
				};

				trades.Add(trade);

				lineCount++;
			}

			using (var connection = new SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
			{
				connection.Open();
				using (var transaction = connection.BeginTransaction())
				{
					foreach (var trade in trades)
					{
						var command = connection.CreateCommand();
						command.Transaction = transaction;
						command.CommandType = System.Data.CommandType.StoredProcedure;
						command.CommandText = "dbo.insert_trade";
						command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
						command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
						command.Parameters.AddWithValue("@lots", trade.Lots);
						command.Parameters.AddWithValue("@price", trade.Price);

						command.ExecuteNonQuery();
					}

					transaction.Commit();
				}
				connection.Close();
			}

			WriteLine("INFO: {0} trades processed", trades.Count);
		}

		private static float LotSize = 100000f;
	}
	internal class TradeRecord
	{
		internal string DestinationCurrency;
		internal float Lots;
		internal decimal Price;
		internal string SourceCurrency;
	}
}


This class is trying to achieve the following:

  1. It reads every line from a Stream and stores each line in a list of strings.
  2. It parses out individual fields from each line and stores them in a more structured list of TradeRecord instances.
  3. The parsing includes some validation and some logging to the console.
  4. Each TradeRecord is enumerated, and a stored procedure is called to insert the trades into a database.

The responsibilities of the TradeProcessor are reading streams, parsing strings, validating fields, logging, and database insertion. The SRP states that this class should have only a single reason to change; however, the reality is that it will change under the following circumstances:

  • When you decide not to use a Stream for input but instead read the trades from a remote call to a web service.
  • When the format of the input data changes, perhaps with the addition of an extra field indicating the broker for the transaction
  • When the validation rules of the input data change
  • When the way in which you log warnings, errors and information changes. If you are using a hosted web service, writing to the console would not be a viable option.
  • When the database changes in some way: perhaps the insert_trade stored procedure requires a new parameter for the broker too, or you decide not to store the data in a relational database and opt for document storage, or the database moves behind a web service that you must call.

For each of these changes, this class would have to be modified.

Refactoring for clarity

This class not only has too many responsibilities, it has a single method with too many responsibilities. So first, you split this method into multiple methods.

public void ProcessTrades(Stream stream)
{
	var lines = ReadTradeData(stream);
	var trades = ParseTrades(lines);
	StoreTrades(trades);
}

Let’s look at ReadTradeData:

private IEnumerable<string> ReadTradeData(Stream stream)
{
	var tradeData = new List<string>();
	using (var reader = new StreamReader(stream))
	{
		string line;
		while ((line = reader.ReadLine()) != null)
		{
			tradeData.Add(line);
		}
	}
	return tradeData;
}

This is exactly the same code that we had in the original; it has simply been encapsulated in a method that returns a list of strings.

Let’s look at the ParseTrades method.

This method has changed somewhat from the original implementation because it, too, delegates tasks to other methods.

private IEnumerable<TradeRecord> ParseTrades(IEnumerable<string> tradeData)
{
	var trades = new List<TradeRecord>();
	var lineCount = 1;
	foreach (var line in tradeData)
	{
		var fields = line.Split(new char[] { ',' });

		if (!ValidateTradeData(fields, lineCount))
		{
			continue;
		}

		var trade = MapTradeDataToTradeRecord(fields);

		trades.Add(trade);

		lineCount++;
	}

	return trades;
}

This method delegates validation and mapping responsibilities to other methods. Without this delegation, this section of the process would still be too complex and it would retain too many responsibilities.

private bool ValidateTradeData(string[] fields, int currentLine)
{
	if (fields.Length != 3)
	{
		LogMessage("WARN: Line {0} malformed. Only {1} field(s) found.", currentLine, fields.Length);
		return false;
	}

	if (fields[0].Length != 6)
	{
		LogMessage("WARN: Trade currencies on line {0} malformed: '{1}'", currentLine, fields[0]);
		return false;
	}

	int tradeAmount;
	if (!int.TryParse(fields[1], out tradeAmount))
	{
		LogMessage("WARN: Trade amount on line {0} not a valid integer: '{1}'", currentLine, fields[1]);
		return false;
	}

	decimal tradePrice;
	if (!decimal.TryParse(fields[2], out tradePrice))
	{
		LogMessage("WARN: Trade price on line {0} not a valid decimal: '{1}'", currentLine, fields[2]);
		return false;
	}

	return true;
}

private void LogMessage(string message, params object[] args)
{
	Console.WriteLine(message, args);
}

private TradeRecord MapTradeDataToTradeRecord(string[] fields)
{
	var sourceCurrencyCode = fields[0].Substring(0, 3);
	var destinationCurrencyCode = fields[0].Substring(3, 3);
	var tradeAmount = int.Parse(fields[1]);
	var tradePrice = decimal.Parse(fields[2]);

	var trade = new TradeRecord
	{
		SourceCurrency = sourceCurrencyCode,
		DestinationCurrency = destinationCurrencyCode,
		Lots = tradeAmount / LotSize,
		Price = tradePrice
	};

	return trade;
}

And finally, the StoreTrades method:

private void StoreTrades(IEnumerable<TradeRecord> trades)
{
	using (var connection = new System.Data.SqlClient.SqlConnection("Data Source=(local);Initial Catalog=TradeDatabase;Integrated Security=True;"))
	{
		connection.Open();
		using (var transaction = connection.BeginTransaction())
		{
			foreach (var trade in trades)
			{
				var command = connection.CreateCommand();
				command.Transaction = transaction;
				command.CommandType = System.Data.CommandType.StoredProcedure;
				command.CommandText = "dbo.insert_trade";
				command.Parameters.AddWithValue("@sourceCurrency", trade.SourceCurrency);
				command.Parameters.AddWithValue("@destinationCurrency", trade.DestinationCurrency);
				command.Parameters.AddWithValue("@lots", trade.Lots);
				command.Parameters.AddWithValue("@price", trade.Price);

				command.ExecuteNonQuery();
			}

			transaction.Commit();
		}
		connection.Close();
	}

	LogMessage("INFO: {0} trades processed", trades.Count());
}

If you compare this with the previous implementation, it is a clear improvement. However, what we have really achieved is readability. The new code is in no way more adaptable than the old: you still need to change the TradeProcessor class under any of the previously mentioned circumstances. To achieve adaptability, you need abstraction.

Refactoring for abstraction

In this step we will introduce several abstractions that will allow us to handle any change request for this class. The next task is to split each responsibility into different classes and place them behind interfaces.
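
The contracts themselves are small. They are not shown in the original listing, but from the way they are used below they would look something like this:

using System.Collections.Generic;

namespace SingleResponsibilityPrinciple.Contracts
{
    public interface ITradeDataProvider
    {
        IEnumerable<string> GetTradeData();
    }

    public interface ITradeParser
    {
        IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData);
    }

    public interface ITradeStorage
    {
        void Persist(IEnumerable<TradeRecord> trades);
    }

    public interface ITradeValidator
    {
        bool Validate(string[] tradeData);
    }

    public interface ITradeMapper
    {
        TradeRecord Map(string[] fields);
    }

    public interface ILogger
    {
        void LogWarning(string message, params object[] args);
    }
}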

public class TradeProcessor
{
    public TradeProcessor(ITradeDataProvider tradeDataProvider, ITradeParser tradeParser, ITradeStorage tradeStorage)
    {
        this.tradeDataProvider = tradeDataProvider;
        this.tradeParser = tradeParser;
        this.tradeStorage = tradeStorage;
    }

    public void ProcessTrades()
    {
        var lines = tradeDataProvider.GetTradeData();
        var trades = tradeParser.Parse(lines);
        tradeStorage.Persist(trades);
    }

    private readonly ITradeDataProvider tradeDataProvider;
    private readonly ITradeParser tradeParser;
    private readonly ITradeStorage tradeStorage;
}

The TradeProcessor class now looks significantly different from the previous implementation. It no longer contains the implementation details for the whole process but instead contains the blueprint for the process. This class models the process of transferring trade data from one format to another. This is its only responsibility, its only concern, and the only reason that this class should change. If the process itself changes, this class will change to reflect it. But if you decide you no longer want to retrieve data from a Stream, log to the console, or store the trades in a database, this class remains as is.

using System.Collections.Generic;
using System.IO;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class StreamTradeDataProvider : ITradeDataProvider
    {
        public StreamTradeDataProvider(Stream stream)
        {
            this.stream = stream;
        }

        public IEnumerable<string> GetTradeData()
        {
            var tradeData = new List<string>();
            using (var reader = new StreamReader(stream))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    tradeData.Add(line);
                }
            }
            return tradeData;
        }

        private readonly Stream stream;
    }
}

using System.Collections.Generic;

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeParser : ITradeParser
    {
        private readonly ITradeValidator tradeValidator;
        private readonly ITradeMapper tradeMapper;

        public SimpleTradeParser(ITradeValidator tradeValidator, ITradeMapper tradeMapper)
        {
            this.tradeValidator = tradeValidator;
            this.tradeMapper = tradeMapper;
        }

        public IEnumerable<TradeRecord> Parse(IEnumerable<string> tradeData)
        {
            var trades = new List<TradeRecord>();
            var lineCount = 1;
            foreach (var line in tradeData)
            {
                var fields = line.Split(new char[] { ',' });

                if (!tradeValidator.Validate(fields))
                {
                    continue;
                }

                var trade = tradeMapper.Map(fields);

                trades.Add(trade);

                lineCount++;
            }

            return trades;
        }
    }
}

using SingleResponsibilityPrinciple.Contracts;

namespace SingleResponsibilityPrinciple
{
    public class SimpleTradeValidator : ITradeValidator
    {
        private readonly ILogger logger;

        public SimpleTradeValidator(ILogger logger)
        {
            this.logger = logger;
        }

        public bool Validate(string[] tradeData)
        {
            if (tradeData.Length != 3)
            {
                logger.LogWarning("Line malformed. Only {0} field(s) found.", tradeData.Length);
                return false;
            }

            if (tradeData[0].Length != 6)
            {
                logger.LogWarning("Trade currencies malformed: '{0}'", tradeData[0]);
                return false;
            }

            int tradeAmount;
            if (!int.TryParse(tradeData[1], out tradeAmount))
            {
                logger.LogWarning("Trade not a valid integer: '{0}'", tradeData[1]);
                return false;
            }

            decimal tradePrice;
            if (!decimal.TryParse(tradeData[2], out tradePrice))
            {
                logger.LogWarning("Trade price not a valid decimal: '{0}'", tradeData[2]);
                return false;
            }

            return true;
        }
    }
}
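
Wiring everything together at the application’s entry point might look like the sketch below. SimpleTradeMapper, AdoNetTradeStorage and ConsoleLogger are assumed implementations of the remaining contracts, named here only for illustration:

using System.IO;

using var stream = File.OpenRead("trades.txt");

var tradeProcessor = new TradeProcessor(
    new StreamTradeDataProvider(stream),
    new SimpleTradeParser(
        new SimpleTradeValidator(new ConsoleLogger()),
        new SimpleTradeMapper()),
    new AdoNetTradeStorage());

tradeProcessor.ProcessTrades();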

Now if you refer back to the list of circumstances, this new version allows you to implement each one without touching the existing classes.

Examples

Scenario 1: Instead of a Stream, your business team asks you to read the data from a web service

Solution: Create a new implementation of ITradeDataProvider

Scenario 2: A new field is added to the data format

Solution: Change the implementations of ITradeValidator, ITradeMapper and ITradeStorage

Scenario 3: The validation rules change

Solution: Edit the ITradeValidator implementation

Scenario 4: Your architect asks you to use a document database instead of a relational database

Solution: Create a new implementation of ITradeStorage

Conclusion

I hope this blog clears your doubts regarding the SRP and convinces you that, by combining abstractions via interfaces with continuous refactoring, you can make your code more adaptive while also adhering to the Single Responsibility Principle.

Reference

Adaptive Code via C# – Gary McLean Hall

When to Use Microservices (And When Not To!)

One of the main problems in our industry is that we focus on the tech tool, not on what the tool lets you do. So when you say you want to use a microservice architecture, first ask yourself: what is it going to give you?

What can microservices architectures give you?

  • More options to scale up applications
  • Independent deployability
  • Limiting the “blast radius” of failure

Microservices architectures buy you options

James Lewis

One should use microservices as a means to obtain a desired outcome rather than for the sake of using a new technology

Don’t use microservices as a default option

Microservices are not an on/off switch. There is a significant cost in committing yourself to a microservice architecture, and once committed it is very difficult to move away from that architecture without a major rework of your solution.

Some of us feel microservices are the way to get a distributed architecture, and they are! But that doesn’t mean a single-process monolith is not distributed; it actually is. For example, if your web server reads data from a database on a separate computer, that’s a distributed system as well. In a lot of cases this is all you might need.

Top reasons to introduce microservices

  • Zero-downtime independent deployability
  • Isolation of data and of processing around that data
  • Use microservices to reflect the organizational structure

How to avoid a distributed monolith

Sometimes when implementing microservices, we end up having to deploy service A whenever we want to update service B. That violates one of the important traits of microservices: independent deployability.

Avoid this trap by:

  • Creating a proper deployment mechanism
  • Looking for patterns of services that keep changing together
  • Deciding whether to merge those dependent services back together again
  • Looking at different ways to slice and dice them

Why strive for independent deployment

It’s easier to limit the impact of each release with microservices. With a microservice architecture, if I am just deploying a single service, I can reduce the scope of that deployment and do it more effectively and efficiently.

As the team size increases it gets exponentially harder to coordinate a deployment

Handling data

A lot of the complexity of breaking up complex systems lies in the data. After extracting a microservice, you need to understand which parts of the old database that service uses.

Handling People

No matter what they tell you, it’s always a people problem

Gerald Weinberg

There has to be a willingness to change as an organization if you want to make the most out of using microservices

Takeaways

  1. Our industry tends to focus on tech instead of the outcome. One should use microservices as a means to obtain a desired outcome rather than for the sake of using a new technology
  2. Microservices shouldn’t be the default option. If you think a service architecture could help, try it with one of the modules from a very simple monolith topology and let it evolve from there
  3. The reasons for using microservices are zero-downtime independent deployability, isolation of data and of the processing around that data, and reflecting the organizational structure

This blog basically lists the key points discussed between Martin Fowler and Sam Newman in a YouTube video.

GOTO Conferences

Thanks for reading 🙏🙏🙏

.NET Multi-platform App UI (MAUI)

MAUI is the .NET Multi-platform App UI, a framework for building native device applications spanning mobile, tablet, and desktop. Announced at the Microsoft BUILD conference on May 19, .NET MAUI is basically an evolution of the Xamarin.Forms toolkit. I hope you are aware that the next versions of .NET are focused on unifying the elements that go into .NET to make one .NET. That includes the UI stacks – including Xamarin! The evolution of Xamarin into .NET means that all UI stacks will be treated equally, and that leads to the .NET Multi-platform App UI, letting you make a single project that covers Windows, macOS, iOS and Android.

Read my previous blog post on why Microsoft wants to pursue one .NET.

Features of .NET MAUI

  • Project structure is simplified into a single project for multiple platforms, with single-click deployment to desktop systems, emulators, simulators or physical devices.
  • Images, fonts and translation files can be added to a single project, with native hooks set up automatically, and resources are housed in one location.
  • Access is provided to the native, underlying operating system APIs.
  • It supports modern app patterns: the existing MVVM and XAML patterns as well as newer capabilities like Model-View-Update (MVU) with C# or even Blazor.

MVVM

This is the existing pattern that Xamarin.Forms and UWP apps currently use:

<StackLayout>
    <Label Text="Welcome to .NET MAUI!" />
    <Button Text="{Binding Text}" 
            Command="{Binding ClickCommand}" />
</StackLayout>

public Command ClickCommand { get; }
public string Text { get; set; } = "Click me";
int count = 0;
void ExecuteClickCommand ()
{
    count++;
    Text = $"You clicked {count} times.";
}

MVU

MAUI will enable developers to write fluent C# UI and implement the increasingly popular Model-View-Update (MVU) pattern. MVU promotes a one-way flow of data and state management, as well as a code-first development experience that rapidly updates the UI by applying only the necessary changes.

readonly State<int> count = 0;

[Body]
View body() => new StackLayout
{
    new Label("Welcome to .NET MAUI!"),
    new Button(
        () => $"You clicked {count} times.",
        () => count.Value++)
};

What about Xamarin.Forms?

Xamarin will still be there; MAUI is a new framework. Based on the information presented at Microsoft Build, you should not worry about the applications you have in Xamarin. In my opinion the learning curve will be quite gentle, and they also spoke about tools to streamline the migration process.

What is the timeline?

.NET MAUI will ship on the same six-week cadence as Xamarin.Forms. Xamarin.iOS and Xamarin.Android are set to become part of .NET 6 as .NET for iOS and .NET for Android.

A new major version of Xamarin.Forms is due later this year, with minor and service releases to follow every six weeks until .NET 6 is generally available in November 2021.

Thanks for reading!!

Thoughts on design patterns

Good design patterns are reusable collaborations between interfaces and classes that can be applied repeatedly in many different scenarios, across projects, platforms, languages and frameworks.

Design patterns were popularized by the Gang of Four book almost 25 years ago. However, they are still very relevant today, even though some of the patterns have been reclassified as anti-patterns.

If your design attempts to satisfy everyone you will likely end up satisfying no one

UNKNOWN

Knowing these design patterns is certainly an important step; however, studying the applications of design patterns is much more important. Implementing design patterns is nuanced, and you need to consider a lot of small variables before settling on a pattern.

As with most notable practices, design patterns are another theoretical tool that it is better to know than not know. They can be overused and they are not always applicable, sometimes overcomplicating an otherwise simple solution with an explosion of classes, interfaces, indirection and abstraction.

In my experience design patterns tend to be either underused or overused. In some projects there are not enough design patterns and the code suffers from a lack of structure. Other projects apply design patterns too liberally, adding unwarranted abstraction and complexity where the benefit is negligible. The balance is in finding the right places to apply the right patterns.

Note that these patterns and practices, just like all others, are merely tools for you to use. Deciding when and where to apply any pattern or practice is part of the art of software development. Overuse leads to code that is adaptive, but on too fine-grained a level to be appreciated or useful. It also affects another key facet of code quality: readability. It is far more common for software to be developed in teams than as an individual pursuit; thus, judiciously selecting when and where to apply each pattern is imperative to ensure that the code remains comprehensible in the future.

Process trillions of events per day using C#

Let’s be real: processing trillions of events per day is challenging in any framework or language. The fact that you can do it using the language you know and love can be really tempting.

In my previous job I worked on a project handling thousands of business events; in addition to storing the events, we wanted the ability to search them and build analytics on top of them. That can be very challenging, especially when you want to scale out to millions or billions of events per day.

What is Trill?

Trill is a high-performance, one-pass, in-memory streaming analytics engine. It can handle both real-time and offline data and is based on a temporal data and query model. Trill can be used as a streaming engine, as a lightweight in-memory relational engine, and as a progressive query processor that produces early query results on partial data.

Internally, Trill has been used by teams working on Azure Stream Analytics, Bing Ads and even Halo.

So seeing this go open source is really incredible!!

How to get started?

Trill is a single-node engine library; any .NET application, service or platform can easily reference Trill and start processing queries.

Let’s see some code

IStreamable<Empty, SensorReading> inputStream;

This is the primary interface for composing streaming operations.

Here is a sample for creating an input stream. SensorReading is assumed to be a simple record with a Time and a Value, and HistoricData a predefined collection of sample readings:

private static IObservable<SensorReading> SimulateLiveData() {
 return ToObservableInterval(HistoricData, TimeSpan.FromMilliseconds(1000));
}

private static IObservable<T> ToObservableInterval<T>(IEnumerable<T> source, TimeSpan period) {
 return Observable.Using(
  source.GetEnumerator,
  it => Observable.Generate(
   default(object),
   _ => it.MoveNext(),
   _ => _,
   _ => {
    Console.WriteLine("Input {0}", it.Current);
    return it.Current;
   },
   _ => period));
}

private static IStreamable<Empty, SensorReading> CreateStream(bool isRealTime) {
 if (isRealTime) {
  return SimulateLiveData()
   .Select(r => StreamEvent.CreateInterval(r.Time, r.Time + 1, r))
   .ToStreamable();
 }

 return HistoricData
  .ToObservable()
  .Select(r => StreamEvent.CreateInterval(r.Time, r.Time + 1, r))
  .ToStreamable();
}

Now that we have a stream of events, let’s add some logic on top of it: a query that detects when a threshold is crossed upwards.

// The query is detecting when a threshold is crossed upwards.
const int threshold = 42;

var crossedThreshold = inputStream.Multicast(
 input => {
  // Alter all events 1 sec in the future.
  var alteredForward = input.AlterEventLifetime(s => s + 1, 1);

  // Compare each event that occurs at input with the previous event.
  // Note: this works for strictly ordered, strictly regular (e.g. 1 sec apart) streams.
  var filteredInputStream = input.Where(s => s.Value > threshold);
  var filteredAlteredStream = alteredForward.Where(s => s.Value < threshold);
  return filteredInputStream.Join(
   filteredAlteredStream,
   (evt, prev) => new {
    evt.Time, Low = prev.Value, High = evt.Value
   });
 });

That’s it. Now you can subscribe to the crossedThreshold stream and print the value whenever a crossing occurs.
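
A sketch of such a subscription using Trill’s observable egress; the payload shape comes from the Join projection above:

crossedThreshold
    .ToStreamEventObservable()
    .Where(e => e.IsData)           // ignore punctuation events
    .Subscribe(e => Console.WriteLine("Output {0}", e.Payload));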

When you run this, you can see that each upward threshold crossing is captured and printed.

Conclusion

The best part about Trill is that it’s just a library: it runs in-process on any computer, and it can spawn multiple threads for parallel processing if configured to do so. To span multiple nodes, you can pair it with Orleans, Azure Stream Analytics, etc.

Resources