Wednesday, January 23, 2019

Recursive map

let map = (fn, list) => !list.length ? [] :
  [fn(list[0])].concat(map(fn, list.slice(1)));

map(x => x + 1, [1, 2, 3, 4]) // => [2,3,4,5]


Another way, using the spread operator:

map = (fn, [head, ...tail]) => (
 head === undefined ? [] : [fn(head), ...map(fn, tail)]
);
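Both versions behave the same on plain numeric arrays, but the spread version stops early if the array itself contains undefined. A small TypeScript sketch of the two (the names mapLen and mapSpread are just labels for this comparison):

```typescript
// Length-based version: recurse until the list is empty.
const mapLen = <A, B>(fn: (a: A) => B, list: A[]): B[] =>
  !list.length ? [] : [fn(list[0])].concat(mapLen(fn, list.slice(1)));

// Spread version: destructure head and tail, stop when head is undefined.
const mapSpread = <A, B>(fn: (a: A) => B, [head, ...tail]: A[]): B[] =>
  head === undefined ? [] : [fn(head), ...mapSpread(fn, tail)];

console.log(mapLen((x: number) => x + 1, [1, 2, 3, 4]));    // [2, 3, 4, 5]
console.log(mapSpread((x: number) => x + 1, [1, 2, 3, 4])); // [2, 3, 4, 5]

// Caveat: the head === undefined test stops early if the array actually
// contains undefined, so the length-based version is the safer of the two.
console.log(mapSpread((x: number | undefined) => x, [1, undefined, 3])); // [1]
```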

Tuesday, January 22, 2019

BDD

We discussed how TDD is a test-centered development process in which we start by writing tests first. Initially these tests fail, but as we add more application code they pass. This helps us in many ways:
  • We write application code based on the tests. This gives a test-first environment for development, and the resulting application code tends to have far fewer bugs.
  • With each iteration we write tests, and as a result each iteration gives us an automated regression pack. This is very helpful because with every iteration we can be sure that earlier features still work.
  • These tests serve as documentation of application behavior and a reference for future iterations.
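The red-green cycle described above can be sketched without any framework. Here is a minimal TypeScript illustration; validatePassword and its rules are invented for this example:

```typescript
// Step 1 (red): write the tests first. They fail because validatePassword
// does not exist yet.
// Step 2 (green): write just enough code to make them pass.
function validatePassword(pw: string): boolean {
  // Minimal rule set, invented for this example:
  // at least 8 characters and at least one digit.
  return pw.length >= 8 && /\d/.test(pw);
}

// A tiny stand-in for a real test runner:
function test(name: string, assertion: boolean): void {
  console.log((assertion ? "PASS" : "FAIL") + ": " + name);
}

test("rejects short passwords", validatePassword("abc1") === false);
test("rejects passwords without digits", validatePassword("abcdefgh") === false);
test("accepts valid passwords", validatePassword("abcdef12") === true);
```

As features are added, the accumulated tests become the automated regression pack described above.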

Behavior Driven Development

Behavior Driven Development (BDD) is an extension of TDD. As in TDD, in BDD we also write tests first and then add application code. The major differences we see here are:
  • Tests are written in a plain, descriptive, English-like grammar
  • Tests are expressed as the behavior of the application and are more user focused
  • Examples are used to clarify requirements
This difference brings in the need for a language which can define application behavior in an understandable format.

Features of BDD

  1. Shifting from thinking in “tests” to thinking in “behavior”
  2. Collaboration between Business stakeholders, Business Analysts, QA Team and developers
  3. A ubiquitous language that makes behavior easy to describe
  4. Driven by Business Value
  5. Extends Test Driven Development (TDD) by utilizing natural language that non-technical stakeholders can understand
  6. BDD frameworks such as Cucumber or JBehave are an enabler, acting as a “bridge” between Business & Technical Language
BDD is popular and can be utilised for unit-level test cases as well as for UI-level test cases. Tools like RSpec (for Ruby), or in .NET something like MSpec or SpecUnit, are popular for unit testing following the BDD approach. Alternatively, you can write BDD-style specifications about UI interactions. Assuming you’re building a web application, you’ll probably use a browser automation library like WatiR/WatiN or Selenium, and script it either using one of the frameworks just mentioned, or a given/when/then tool such as Cucumber (for Ruby) or SpecFlow (for .NET).

BDD Tools Cucumber & SpecFlow

What is Cucumber?

Cucumber is a testing framework which supports Behavior Driven Development (BDD). It lets us define application behavior in plain, meaningful English text using a simple grammar defined by a language called Gherkin. Cucumber itself is written in Ruby, but it can be used to “test” code written in Ruby or other languages, including but not limited to Java, C#, and Python.

What is SpecFlow?

SpecFlow is inspired by the Cucumber framework from the Ruby on Rails world. Cucumber uses plain English in the Gherkin format to express user stories. Once the user stories and their expectations are written, the Cucumber gem is used to execute those stories. SpecFlow brings the same concept to the .NET world and allows the developer to express features in plain English. It also allows you to write specifications in the human-readable Gherkin format.

Why BDD Framework?

Let’s assume a client of an E-Commerce website wants to increase product sales by implementing some new features on the website. The challenge for the development team is to convert the client’s idea into something that actually delivers those benefits to the client.
The original idea is awesome. But the person who develops the idea is not the same person who had it. If the person who has the idea happens to be a talented software developer, then we might be in luck: the idea could be turned into working software without ever needing to be explained to anyone else. More often, though, the idea needs to be communicated and has to travel from the business owners (the client) to the development teams and many other people.
Most software projects involve teams of several people working collaboratively, so high-quality communication is critical to their success. As you probably know, good communication isn’t just about eloquently describing your ideas to others; you also need to solicit feedback to ensure you’ve been understood correctly. This is why agile software teams have learned to work in small increments, using the software that’s built incrementally as the feedback that asks the stakeholders, “Is this what you mean?”
The image below is an example of what clients have in mind and communicate to the team of developers, and how developers understand it and work on it.

Wrong Perception

Perception
With the help of the Gherkin language, Cucumber helps facilitate the discovery and use of a ubiquitous language within the team. Tests written in Cucumber directly interact with the development code, but they are written in a language that is quite easy for business stakeholders to understand. Cucumber tests remove many misunderstandings long before they create any ambiguities in the code.

Example of a Cucumber/SpecFlow/BDD Test:

The main feature of Cucumber is its focus on acceptance testing. It makes it easy for anyone on the team to read and write tests, and with this it brings business users into the test process, helping teams explore and understand requirements.
Feature: Sign up
  Sign up should be quick and friendly.

  Scenario: Successful sign up
    New users should get a confirmation email and be greeted
    personally by the site once signed in.

    Given I have chosen to sign up
    When I sign up with valid details
    Then I should receive a confirmation email
    And I should see a personalized greeting message

  Scenario: Duplicate email
    Where someone tries to create an account for an email address
    that already exists.

    Given I have chosen to sign up
    But I enter an email address that has already registered
    Then I should be told that the email is already registered
    And I should be offered the option to recover my password
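In a real project, each Given/When/Then line is bound to a step definition by the BDD framework. The TypeScript sketch below hand-rolls that binding to show the mechanism; a real project would use cucumber-js or SpecFlow instead, and all names here are invented:

```typescript
// A minimal Given/When/Then binder: each step pattern maps to a function.
type Step = { pattern: RegExp; fn: () => void };
const steps: Step[] = [];
const defineStep = (pattern: RegExp, fn: () => void) =>
  steps.push({ pattern, fn });

// World object shared between steps, as in Cucumber.
const world = { signedUp: false, emails: [] as string[] };

defineStep(/^I have chosen to sign up$/, () => { world.signedUp = true; });
defineStep(/^I sign up with valid details$/, () => {
  if (world.signedUp) world.emails.push("confirmation");
});
defineStep(/^I should receive a confirmation email$/, () => {
  if (world.emails.indexOf("confirmation") === -1) throw new Error("no email");
});

// Run a scenario line by line (keywords already stripped, as Cucumber does).
function run(lines: string[]): void {
  for (const line of lines) {
    let matched = false;
    for (const s of steps) {
      if (s.pattern.test(line)) { s.fn(); matched = true; break; }
    }
    if (!matched) throw new Error("Undefined step: " + line);
  }
}

run([
  "I have chosen to sign up",
  "I sign up with valid details",
  "I should receive a confirmation email",
]); // completes without throwing, so the scenario passes
```

The plain-English scenario stays readable for business users, while the step definitions carry the executable detail.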

After a look at the example code above, anybody can understand how the test works and what it is intended to do. It has a surprisingly powerful impact by enabling people to visualize the system before it has been built. Any business user can read and understand the test, give you feedback on whether it reflects their understanding of what the system should do, and it can even lead to thinking of other scenarios that need to be considered too.


Monday, January 21, 2019

Unit of Work


Unit of Work in the Repository Pattern

Unit of Work refers to a single transaction that involves multiple operations such as inserts, updates, and deletes. Put simply, it means that for a specific user action (say, registration on a website), all the insert/update/delete operations are done in one single transaction, rather than as multiple database transactions.

To understand this concept, consider the following implementation of the Repository Pattern using a non-generic repository, for a Customer entity.

Repository Pattern

The code above seems to be fine. The issue arises when we add a repository for another entity, say Order. In that case, both repositories will create and maintain their own instance of the DbContext. This can lead to problems, since each DbContext will have its own in-memory list of changes for the entities being added/updated/deleted in a single transaction/operation. If the SaveChanges of one repository fails and the other succeeds, the result is database inconsistency. This is where the concept of Unit of Work is relevant.

To avoid this, we will add an intermediate layer between the controller and the repositories. This layer will act as a centralized store from which all the repositories receive their instance of the DbContext. This ensures that a unit of work spanning multiple repositories either completes for all entities or fails entirely, as all of them share the same instance of the DbContext. In our example, while adding data for the Order and Customer entities in a single transaction, both will use the same DbContext instance. This situation, without and with Unit of Work, can be represented as follows:

Unit of Work

In the representation above, a single operation involving the Customer and Order entities uses the same DbContext instance for both. This ensures that if one of them fails, the other is not saved either, maintaining database consistency. So when SaveChanges is executed, it is done for both repositories.

Let us implement this concept in our example. We add a new class called UnitOfWork, which receives the instance of the DbContext. The same class generates the required repository instances, in other words repository instances for Order and Customer, and passes the same DbContext to both repositories. Our UnitOfWork looks like the following:

UnitOfWork

And, our Customer Repository will be changed, to receive the instance of DbContext from the unit of work class. See the code below:

Repository Pattern

Similarly, we can have the code for the Order repository. Finally, our controller code will be like the following:



Here, both the Order and Customer repository use the same instance of DbContext and we are executing the save changes using the instance unit of work class. So the changes of a single transaction are either done for both or none.
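As a rough, language-neutral sketch of the idea described above, here is a toy TypeScript version in which one shared context collects changes from both repositories and commits them together (all names are invented for illustration; a real implementation would use EF's DbContext):

```typescript
// A toy DbContext: tracks pending changes and commits them all at once.
class DbContext {
  pending: string[] = [];
  committed: string[] = [];
  add(change: string): void { this.pending.push(change); }
  saveChanges(): void {
    // All tracked changes commit together, or none do.
    this.committed.push(...this.pending);
    this.pending = [];
  }
}

class CustomerRepository {
  constructor(private ctx: DbContext) {}
  add(name: string): void { this.ctx.add("customer:" + name); }
}

class OrderRepository {
  constructor(private ctx: DbContext) {}
  add(id: number): void { this.ctx.add("order:" + id); }
}

// The unit of work owns the single DbContext and hands it to every repository.
class UnitOfWork {
  readonly context = new DbContext();
  readonly customers = new CustomerRepository(this.context);
  readonly orders = new OrderRepository(this.context);
  commit(): void { this.context.saveChanges(); }
}

const uow = new UnitOfWork();
uow.customers.add("Alice");
uow.orders.add(42);
uow.commit(); // both inserts commit in one SaveChanges
```

Because both repositories write into the same context, nothing reaches the store until commit is called, which is exactly the all-or-nothing behavior the pattern is after.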

Friday, January 18, 2019

Difference Between Union and Union All

UNION
The UNION command is used to combine the result sets of two queries. Unlike JOIN, which combines columns from related tables side by side, UNION appends the rows of one result set to another. When using the UNION command, all selected columns need to be of compatible data types. With UNION, only distinct values are selected.

The following points need to be considered when using the UNION operator:

The number of columns and sequence of columns must be the same in all queries
The data types must be compatible.

UNION ALL
The UNION ALL command is similar to the UNION command, except that UNION ALL returns all values, including duplicates.

The difference between UNION and UNION ALL is that UNION ALL does not eliminate duplicate rows; it simply pulls all rows from all tables that fit your query and combines them into one result set.

Table 1

First
Second
Third
Fourth
Fifth

Table 2

First
Third
Fifth


Select * from Table1 Union All Select * from Table2

First
Second
Third
Fourth
Fifth
First
Third
Fifth


Select * from Table1 Union Select * from Table2

First
Third
Fifth
Second
Fourth
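The two result sets above can be reproduced with a short TypeScript sketch (UNION ALL simply concatenates; UNION also removes duplicates). Note that SQL does not guarantee row order for UNION unless you add ORDER BY, which is why the order shown in the table above can differ:

```typescript
const table1 = ["First", "Second", "Third", "Fourth", "Fifth"];
const table2 = ["First", "Third", "Fifth"];

// UNION ALL: concatenate both result sets, duplicates kept.
const unionAll = <T>(a: T[], b: T[]): T[] => a.concat(b);

// UNION: concatenate, then keep only the first occurrence of each value.
const union = <T>(a: T[], b: T[]): T[] => {
  const all = a.concat(b);
  return all.filter((v, i) => all.indexOf(v) === i);
};

console.log(unionAll(table1, table2));
// ["First","Second","Third","Fourth","Fifth","First","Third","Fifth"]
console.log(union(table1, table2));
// ["First","Second","Third","Fourth","Fifth"]
```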

Sunday, January 6, 2019

Onion Architecture

Introduction: What is Onion Architecture?

The problem with the traditional architecture used in ASP.NET was tight coupling and a lack of separation of concerns.

Onion Architecture was introduced by Jeffrey Palermo to provide a better way to build applications, with better testability, maintainability, and dependability.

Onion Architecture addresses the challenges faced with 3-tier and n-tier architectures and provides a solution for common problems. Its layers interact with each other by using interfaces.

Principles
Onion Architecture is based on the inversion of control principle.

Onion Architecture is comprised of multiple concentric layers interfacing each other towards the core that represents the domain. The architecture does not depend on the data layer as in classic multi-tier architectures, but on the actual domain models.

Problem and Solution
In traditional architecture, the UI layer talks to the business logic, the business logic talks to the data layer, and all the layers are mixed up and depend heavily on each other. In 3-tier and n-tier architectures, none of the layers are independent; this raises separation-of-concerns issues. Such systems are very hard to understand and maintain. The drawback of this traditional architecture is unnecessary coupling.


The layers of Onion Architecture

Onion Architecture solves these problems by defining layers from the core to the infrastructure. It applies the fundamental rule of moving all coupling toward the center. This architecture is undoubtedly biased toward object-oriented programming, and it puts objects before all else. At the center of Onion Architecture is the domain model, which represents the business and behavior objects. Around the domain layer are other layers, with more behaviors.

Layers of Onion Architecture
Onion Architecture uses the concept of layers, but they are different from 3-tier and n-tier architecture layers. Let's see what each of these layers represents and should contain.

Domain Layer
At the center part of the Onion Architecture, the domain layer exists; this layer represents the business and behavior objects. The idea is to have all of your domain objects at this core. It holds all application domain objects. Besides the domain objects, you also could have domain interfaces. These domain entities don't have any dependencies. Domain objects are also flat as they should be, without any heavy code or dependencies.

Repository Layer
This layer creates an abstraction between the domain entities and the business logic of an application. In this layer, we add interfaces that provide object saving and retrieving behavior, typically by involving a database. This layer consists of the data access pattern, which is a more loosely coupled approach to data access. We also create a generic repository, and add queries to retrieve data from the source, map the data from the data source to a business entity, and persist changes in the business entity to the data source.

Services Layer
The Service layer holds interfaces with common operations, such as Add, Save, Edit, and Delete. This layer is also used to communicate between the UI layer and the repository layer. The Service layer can also hold business logic for an entity. In this layer, service interfaces are kept separate from their implementations, keeping loose coupling and separation of concerns in mind.

UI Layer
It's the outermost layer, and keeps peripheral concerns like UI and tests. For a Web application, it represents the Web API or Unit Test project. This layer implements the dependency injection principle so that the application builds a loosely coupled structure and can communicate with the internal layers via interfaces.

Implementation of Onion Architecture
The Onion Architecture guidelines provide no direction about how the layers should be implemented. The architect decides the implementation and is free to choose whatever level of class, package, or module is required to add to the solution.
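As one possible implementation sketch, the dependency direction can be shown in a few lines of TypeScript: the domain core defines the abstractions, the outer layers implement them, and the composition root wires everything together (all names are invented for illustration):

```typescript
// Domain layer (the core): entity + repository abstraction, no outward deps.
interface Product { id: number; name: string; }
interface IProductRepository { getById(id: number): Product | undefined; }

// Service layer: depends only on the domain abstraction, never on the database.
class ProductService {
  constructor(private repo: IProductRepository) {}
  getName(id: number): string {
    const p = this.repo.getById(id);
    return p ? p.name : "unknown";
  }
}

// Infrastructure (outer layer): implements the domain interface.
// An in-memory store stands in for a real database here.
class InMemoryProductRepository implements IProductRepository {
  private data: Product[] = [{ id: 1, name: "Widget" }];
  getById(id: number): Product | undefined {
    for (const p of this.data) if (p.id === id) return p;
    return undefined;
  }
}

// Composition root (UI layer): wires the outer layer into the core.
const productService = new ProductService(new InMemoryProductRepository());
console.log(productService.getName(1)); // "Widget"
```

Notice that the arrows only point inward: the repository implementation knows about the domain interface, but the domain knows nothing about the infrastructure.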

Benefits and Drawbacks of Onion Architecture
Following are the benefits of implementing Onion Architecture:

Onion Architecture layers are connected through interfaces; implementations are provided at run time.
Application architecture is built on top of a domain model.
All external dependencies, like database access and service calls, are represented in external layers.
The internal layers have no dependencies on external layers.
Coupling is directed toward the center.
A flexible, sustainable, and portable architecture.
No need to create common and shared projects.
The application core can be quickly tested because it does not depend on anything.
A few drawbacks of Onion Architecture are as follows:

Not easy for beginners to understand; there is a learning curve involved. Architects often mishandle splitting responsibilities between layers.
Heavy use of interfaces.
Conclusion
Onion Architecture is widely accepted in the industry. It's very powerful and closely connected to two other architectural styles: Layered and Hexagonal.
The internal layers never depend on the external layers. Code that is likely to change should be part of an external layer.

Source : https://www.codeguru.com/csharp/csharp/cs_misc/designtechniques/understanding-onion-architecture.html


https://blog.thedigitalgroup.com/understanding-onion-architecture

Saturday, January 5, 2019

Benefit of Repository Pattern

The Repository pattern makes it easier to test your application logic

The Repository pattern allows you to easily test your application with unit tests. Remember that unit tests only test your code, not infrastructure, so the repository abstractions make it easier to achieve that goal.
As noted in an earlier section, it's recommended that you define and place the repository interfaces in the domain model layer so the application layer, such as your Web API microservice, doesn't depend directly on the infrastructure layer where you've implemented the actual repository classes. By doing this and using Dependency Injection in the controllers of your Web API, you can implement mock repositories that return fake data instead of data from the database. This decoupled approach allows you to create and run unit tests that focus on the logic of your application without requiring connectivity to the database.
Connections to databases can fail and, more importantly, running hundreds of tests against a database is bad for two reasons. First, it can take a long time because of the large number of tests. Second, the database records might change and impact the results of your tests, so that they might not be consistent. Testing against the database isn't a unit test but an integration test. You should have many unit tests running fast, but fewer integration tests against the databases.
In terms of separation of concerns for unit tests, your logic operates on domain entities in memory. It assumes the repository class has delivered those. Once your logic modifies the domain entities, it assumes the repository class will store them correctly. The important point here is to create unit tests against your domain model and its domain logic. Aggregate roots are the main consistency boundaries in DDD.
The repositories implemented in eShopOnContainers rely on EF Core’s DbContext implementation of the Repository and Unit of Work patterns using its change tracker, so they don’t duplicate this functionality.
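A minimal TypeScript sketch of this testing approach, with all names invented: the logic under test depends only on a repository interface, so a unit test can hand it a mock that returns fake data, and no database is touched:

```typescript
interface Customer { id: number; name: string; }
interface ICustomerRepository { getById(id: number): Customer | undefined; }

// The application logic under test depends only on the repository interface.
class GreetingService {
  constructor(private repo: ICustomerRepository) {}
  greet(id: number): string {
    const c = this.repo.getById(id);
    return c ? "Hello, " + c.name + "!" : "Hello, guest!";
  }
}

// Unit test setup: a mock repository returns fake data instead of
// querying a database.
const mockRepo: ICustomerRepository = {
  getById: (id) => (id === 1 ? { id: 1, name: "Alice" } : undefined),
};
const greeter = new GreetingService(mockRepo);

console.log(greeter.greet(1)); // "Hello, Alice!"
console.log(greeter.greet(2)); // "Hello, guest!"
```

Swapping the mock for a real repository at composition time leaves GreetingService unchanged, which is what makes the logic unit-testable in isolation.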

The difference between the Repository pattern and the legacy Data Access class (DAL class) pattern

A data access object directly performs data access and persistence operations against storage. A repository marks the data with the operations you want to perform in the memory of a unit of work object (as in EF when using the DbContext class), but these updates aren't performed immediately to the database.
A unit of work is referred to as a single transaction that involves multiple insert, update, or delete operations. In simple terms, it means that for a specific user action, such as a registration on a website, all the insert, update, and delete operations are handled in a single transaction. This is more efficient than handling multiple database transactions in a chattier way.
These multiple persistence operations are performed later in a single action when your code from the application layer commands it. The decision about applying the in-memory changes to the actual database storage is typically based on the Unit of Work pattern. In EF, the Unit of Work pattern is implemented as the DbContext.
In many cases, this pattern or way of applying operations against the storage can increase application performance and reduce the possibility of inconsistencies. It also reduces transaction blocking in the database tables, because all the intended operations are committed as part of one transaction. This is more efficient in comparison to executing many isolated operations against the database. Therefore, the selected ORM can optimize the execution against the database by grouping several update actions within the same transaction, as opposed to many small and separate transaction executions.

Friday, January 4, 2019

Repository Pattern

The repository pattern is used to create an abstraction layer between the DAL (data access layer) and the BAL (business access layer) to perform CRUD operations.

If an application does not follow the Repository Pattern, it may have the following problems:

Duplicate database operations codes
Need of UI to unit test database operations and business logic
Need of External dependencies to unit test business logic
Difficult to implement database caching, etc.
Using the Repository Pattern has many advantages:

Your business logic can be unit tested without data access logic;
The database access code can be reused;
Your database access code is centrally managed so easy to implement any database access policies, like caching;
It’s easy to implement domain logic;
Your domain entities or business entities are strongly typed with annotations;

We have an entity model

public class PersonModel
{
    public string Name { get; set; }
    public int Age { get; set; }
}

The service that is loading a person out of the database is ICompanyLogic. It consists of the following method definition.

public interface ICompanyLogic
{
    PersonModel GetPersonByName(string name);
}

The implementation of the ICompanyLogic is handled by CompanyLogic.

public class CompanyLogic : ICompanyLogic
{
    private IPersonDataContext _personDataContext;

    public CompanyLogic(IPersonDataContext personDataContext)
    {
        _personDataContext = personDataContext;
    }

    public PersonModel GetPersonByName(string name)
    {
        using (var ctx = _personDataContext.NewContext())
        {
            var person = ctx.People.First(p => p.Name.Equals(name));
            return person;
        }
    }
}


So far, this isn't so bad. We have a business service CompanyLogic that can retrieve a single person from the database.

But then we have a new requirement that says we also need a way to load a company from another database. So we need to add a new method and extend CompanyLogic.

CompanyModel represents the model stored in the company database.

public class CompanyModel
{
    public string Name { get; set; }
    public int Size { get; set; }
    public bool Public { get; set; }
}

We extend CompanyLogic to have a method that returns a company by name.


public class CompanyLogic : ICompanyLogic
{
    private IPersonDataContext _personDataContext;
    private ICompanyDataContext _companyDataContext;

    public CompanyLogic(IPersonDataContext personDataContext,
                        ICompanyDataContext companyDataContext)
    {
        _personDataContext = personDataContext;
        _companyDataContext = companyDataContext;
    }

    public PersonModel GetPersonByName(string name)
    {
        using (var ctx = _personDataContext.NewContext())
        {
            var person = ctx.People.First(p => p.Name.Equals(name));
            return person;
        }
    }

    public CompanyModel GetCompanyByName(string companyName)
    {
        using (var ctx = _companyDataContext.NewContext())
        {
            var company = ctx.Company.First(c => c.Name.Equals(companyName));
            return company;
        }
    }
}

Now we are starting to see the problems with this initial solution. Here is a short list of things that are not ideal.

CompanyLogic knows how to access two different databases.
We have duplicated code in our using statements.
Our logic knows how people and companies are stored.
GetPersonByName and GetCompanyByName cannot be reused without bringing in all of CompanyLogic.
In addition to all of this, how do we test CompanyLogic in its current state? We have to mock the data contexts for people and companies to have literal database records.

Implementing Repository Pattern
The repository pattern adds an abstraction layer over the top of data access. 

Let's begin by creating our IPersonRepository interface and its accompanying implementation.

public interface IPersonRepository
{
    PersonModel GetPersonByName(string name);
}

public class PersonRepository: IPersonRepository
{
    private IPersonDataContext _personDataContext;
    public PersonRepository(IPersonDataContext personDataContext)
    {
        _personDataContext= personDataContext;
    }

    public PersonModel GetPersonByName(string name)
    {
        using(var ctx = _personDataContext.NewContext())
        {
            return ctx.People.First(p => p.Name.Equals(name));
        }
    }
}

Then we can do something very similar for companies. We can create the ICompanyRepository interface and its implementation.

public interface ICompanyRepository
{
    CompanyModel GetCompanyByName(string name);
}

public class CompanyRepository : ICompanyRepository
{
    private ICompanyDataContext _companyDataContext;

    public CompanyRepository(ICompanyDataContext companyDataContext)
    {
        _companyDataContext = companyDataContext;
    }

    public CompanyModel GetCompanyByName(string name)
    {
        using (var ctx = _companyDataContext.NewContext())
        {
            return ctx.Company.First(c => c.Name.Equals(name));
        }
    }
}

We now have two separate repositories. PersonRepository knows how to load a given person by name from the person database. CompanyRepository can load companies by name from the company database. Now let's refactor CompanyLogic to leverage these repositories instead of the data contexts.

public class CompanyLogic : ICompanyLogic
{
    private IPersonRepository _personRepo;
    private ICompanyRepository _companyRepo;

    public CompanyLogic(IPersonRepository personRepo,
                        ICompanyRepository companyRepo)
    {
        _personRepo = personRepo;
        _companyRepo = companyRepo;
    }

    public PersonModel GetPersonByName(string name)
    {
        return _personRepo.GetPersonByName(name);
    }

    public CompanyModel GetCompanyByName(string companyName)
    {
        return _companyRepo.GetCompanyByName(companyName);
    }
}

Look at that, our logic layer no longer knows anything about databases. We have abstracted away how a person and a company are loaded. So what benefits have we gained?

The repository interfaces are reusable. They could be used in other logic layers without changing a thing.
Testing is a lot simpler. We mock the interface response so we can focus on testing our logic.
Database access code for people and companies is centrally managed in one place.
Optimizations can be made at a repository level. The interface is defined and agreed upon. The developer working on the repository can then store data how she sees fit.

 The moral of the story is that data access should be a single responsibility interface. This interface can then be injected into business layers to add any additional logic.

Thursday, January 3, 2019

Tuesday, January 1, 2019

HostListener

The HostListener decorator allows a directive to listen to events on its host element.
We do this by decorating a method on the directive or component with the @HostListener() decorator.

What is Content Projection

Sometimes when we are creating components we want to pass inner markup as an argument to the component. This technique is called content projection.

exportAs in Directive

How to create reference variable for a directive?

The way to reference a component is by using a template reference variable. We can reference directives the same way.

In order to give the templates a reference to a directive we use the exportAs attribute. This will allow the host element (or a child of the host element) to define a template variable that references the directive using the #var="exportName" syntax.
Let’s add the exportAs attribute to our directive:

import { Directive, ElementRef, HostListener, Input } from '@angular/core';

@Directive({
  selector: '[popup]',
  exportAs: 'popup',
})
export class PopupDirective {
  @Input() message: string;

  constructor(_elementRef: ElementRef) {
    console.log(_elementRef);
  }

  @HostListener('click') displayMessage(): void {
    alert(this.message);
  }
}

And now we need to change the two elements to export the template reference:

template: `
  <div popup #popup1="popup" message="Clicked the message">
    Learning Directives
  </div>

  <div popup #popup2="popup" message="Clicked the alarm icon">
    This should use our Popup directive
  </div>
`