Thursday, November 29, 2018

WPF Rendering

The WPF graphics system uses device-independent units to enable resolution and device independence. Each device-independent pixel automatically scales with the system's dots per inch (dpi) setting, which is what makes a WPF application dpi-aware by default. (UIElement.SnapsToDevicePixels, by contrast, does not provide dpi scaling; it only forces rendering to align with physical device pixels so that edges don't look blurry from anti-aliasing.)

There are two system factors that determine the size of text and graphics on your screen: resolution and DPI. Resolution describes the number of pixels that appear on the screen. As the resolution gets higher, pixels get smaller, causing graphics and text to appear smaller. A graphic displayed on a monitor set to 1024 x 768 will appear much smaller when the resolution is changed to 1600 x 1200. 

The other system setting, DPI, describes the size of a screen inch in pixels. Most Windows systems have a DPI of 96, which means a screen inch is 96 pixels. Increasing the DPI setting makes the screen inch larger; decreasing the DPI makes it smaller. This means that a screen inch isn't necessarily the same size as a real-world inch; on most systems it probably isn't. As you increase the DPI, DPI-aware graphics and text become larger because you've increased the size of the screen inch. Increasing the DPI can make text easier to read, especially at high resolutions.
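To make the relationship concrete, here is a minimal sketch (96 is WPF's device-independent unit baseline; the 120 DPI value is just an example):

// WPF device-independent units are defined as 1/96 of a screen inch
double deviceIndependentUnits = 96;   // e.g. a rectangle declared as 96 units wide
double systemDpi = 120;               // a system configured for 120 DPI (125% scaling)

// physical pixels actually rendered on the device
double physicalPixels = deviceIndependentUnits * systemDpi / 96;   // = 120 pixels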

Interview

1. How does WPF render controls?
2. If there are two buttons on a window and you want to convert one button into a textbox on the click of the second button, what will you do?
3. If there are two user controls, each containing a button, and on the click of each button you want to toggle the other button, how will you do that?
4. If there is a list of control names and you want to create all of the controls dynamically, how will you do that?

5. Difference between sealed and singleton.
Ans: There are two main differences:
a. Any number of objects can be created of a sealed class, whereas only one instance of a singleton class can exist.
b. A singleton class can be inherited by a nested class, while a sealed class cannot be inherited by either a nested or an outer class.
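A minimal C# sketch of both points (the class names are made up for illustration):

// A sealed class cannot be inherited, but any number of instances can be created.
public sealed class SealedLogger
{
    public void Log(string message) { }
}

// A singleton exposes exactly one instance. A nested class can still derive from it,
// because the nested class has access to the outer class's private constructor.
public class Singleton
{
    private Singleton() { }

    public static Singleton Instance { get; } = new Nested();

    private class Nested : Singleton { }
}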

6. What is explicit interface implementation?
Ans: In explicit interface implementation you define the method prefixed with the interface name, and the member can then only be accessed through a reference of that interface type.

// assuming: interface Interface1 { void Add(); }
class Calculator : Interface1
{
    void Interface1.Add()
    {
    }
}

7. What are the benefits of routed events?

Large Object Heap

.NET’s Garbage Collector (GC) implements many performance optimizations. One of them, the generational model, assumes that young objects die quickly, whereas old ones live longer. This is why the managed heap is divided into three generations. We call them Gen 0 (youngest), Gen 1 (short-living) and Gen 2 (oldest). New objects are allocated in Gen 0. When the GC tries to allocate a new object and Gen 0 is full, it performs a Gen 0 cleanup, i.e. a partial cleanup (Gen 0 only). It traverses the object graph, starting from the roots (local variables, static fields and so on), and marks all of the referenced objects as living objects.
This is the first phase, called “mark”. This phase can be non-blocking; everything else that the GC does is fully blocking. The GC suspends all of the application threads to perform the next steps.
Living objects are promoted (most of the time moved, i.e. copied!) to Gen 1, and the memory of Gen 0 is cleaned up. Gen 0 is usually very small, so this is very fast. In a perfect scenario, which could be a web request, none of the objects survive: all allocated objects should die when the request ends, so the GC just sets the next-object pointer back to the beginning of Gen 0. After some Gen 0 collections we get to the situation where Gen 1 is also full, so the GC can't just promote more objects to it. Then it simply collects Gen 1 memory. Gen 1 is also small, so it's fast. Either way, the Gen 1 survivors are promoted to Gen 2. Gen 2 objects are supposed to be long-living objects. Gen 2 is very big and it's very time-consuming to collect its memory, so garbage collection of Gen 2 is something that we want to avoid, because a full Gen 2 collection can visibly affect the user experience.
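A small console sketch (not from the original post) that makes the promotion visible via GC.GetGeneration:

var obj = new object();
Console.WriteLine(GC.GetGeneration(obj)); // 0 - freshly allocated in Gen 0

GC.Collect();                             // obj is still referenced, so it survives
Console.WriteLine(GC.GetGeneration(obj)); // 1 - promoted to Gen 1

GC.Collect();
Console.WriteLine(GC.GetGeneration(obj)); // 2 - promoted to Gen 2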

Large Object Heap (LOH)

When a large object is allocated, it's treated as a Gen 2 object, not Gen 0 as for small objects. The consequence is that if you run out of memory in the LOH, the GC cleans up the whole managed heap, not only the LOH. So it cleans up Gen 0, Gen 1 and Gen 2 including the LOH. This is called a full garbage collection and is the most time-consuming garbage collection. For many applications it can be acceptable, but definitely not for high-performance web servers, where a few big memory buffers are needed to handle an average web request (read from a socket, decompress, decode JSON and more).
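A quick sketch to see this for yourself (85,000 bytes is the documented LOH threshold):

var small = new byte[1000];
var large = new byte[85000];                // at or above 85,000 bytes goes to the LOH

Console.WriteLine(GC.GetGeneration(small)); // 0 - small objects start in Gen 0
Console.WriteLine(GC.GetGeneration(large)); // 2 - LOH objects are reported as Gen 2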

The Solution

The solution is very simple: buffer pooling. A pool is a set of initialized objects that are ready to use. Instead of allocating a new object, we rent it from the pool. Once we are done using it, we return it to the pool. Every large managed object is an array or an array wrapper (a string contains a length field and an array of chars). So we need to pool arrays to avoid this problem.
ArrayPool is a high-performance pool of managed arrays. You can find it in the System.Buffers package and its source code is available on GitHub. It's mature and ready to use in production. It targets .NET Standard 1.1, which means that you can use it not only in your new and fancy .NET Core apps, but also in existing .NET 4.5.1 apps as well!

Sample

var samePool = ArrayPool<byte>.Shared;
byte[] buffer = samePool.Rent(minLength);
try
{
    Use(buffer);
}
finally
{
    samePool.Return(buffer);
    // don't use the reference to the buffer after returning it!
}

void Use(byte[] buffer) { /* it's a regular managed array */ }

How to use it?

First of all you need to obtain an instance of the pool. You can do it in at least three ways:
  • Recommended: use the ArrayPool<T>.Shared property, which returns a shared pool instance. It's thread safe and all you need to remember is that it has a default max array length, equal to 2^20 (1024*1024 = 1,048,576).
  • Call the static ArrayPool<T>.Create method, which creates a thread-safe pool with a custom maxArrayLength and maxArraysPerBucket (see the sketch after this list). You might need it if the default max array length is not enough for you. Please be warned that once you create it, you are responsible for keeping it alive.
  • Derive a custom class from abstract ArrayPool and handle everything on your own.
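A minimal sketch of the second option (the limits below are arbitrary example values; Use is the same placeholder method as in the sample above):

private static readonly ArrayPool<byte> CustomPool =
    ArrayPool<byte>.Create(maxArrayLength: 4 * 1024 * 1024, maxArraysPerBucket: 10);

void Handle()
{
    byte[] buffer = CustomPool.Rent(2 * 1024 * 1024);
    try { Use(buffer); }
    finally { CustomPool.Return(buffer); }
}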
The next thing is to call the Rent method, which requires you to specify the minimum length of the buffer. Keep in mind that what Rent returns might be bigger than what you have asked for.
byte[] webRequest = request.Bytes;
byte[] buffer = ArrayPool<byte>.Shared.Rent(webRequest.Length);

Array.Copy(
    sourceArray: webRequest, 
    destinationArray: buffer, 
    length: webRequest.Length); // webRequest.Length != buffer.Length!!
Once you are done using it, you just Return it to the SAME pool. The Return method has an overload which allows you to clear the buffer, so that a subsequent consumer calling Rent will not see the previous consumer's content. By default the contents are left unchanged.
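For example, a minimal sketch of the clearing overload:

// clearArray: true wipes the contents before the buffer goes back to the pool
ArrayPool<byte>.Shared.Return(buffer, clearArray: true);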
A very important note from the ArrayPool code:
Once a buffer has been returned to the pool, the caller gives up all ownership of the buffer. The reference returned from a given call to Rent must be returned via Return only once.
It means that the developer is responsible for doing things right. If you keep using the reference to the buffer after returning it to the pool, you are risking unexpected behavior. As far as I know, there is no static code analysis tool that can verify the correct usage (as of today). ArrayPool is part of the corefx libraries; it's not part of the C# language.

Garbage collection is one of the premiere features of the .NET managed coding platform. As the platform has become more capable, we’re seeing developers allocate more and more large objects. Since large objects are managed differently than small objects, we’ve heard a lot of feedback requesting improvement. Today’s post is by Surupa Biswas and Maoni Stephens from the garbage collection feature team. — Brandon

The CLR manages two different heaps for allocation, the small object heap (SOH) and the large object heap (LOH). Any allocation greater than or equal to 85,000 bytes goes on the LOH. Copying large objects has a performance penalty, so, unlike the SOH, the LOH is not compacted. Another defining characteristic is that the LOH is only collected during a generation 2 collection. Together, these have the built-in assumption that large object allocations are infrequent.

Because the LOH is not compacted, memory management is more like a traditional allocator. The CLR keeps a free list of available blocks of memory. When allocating a large object, the runtime first looks at the free list to see if it will satisfy the allocation request. When the GC discovers adjacent objects that died, it combines the space they used into one free block which can be used for allocation. Because a lot of interaction with the free list takes place at the time of allocation, there are tradeoffs between speed and optimal placement of memory blocks.

A condition known as fragmentation can occur when nothing on the free list can be used. This can result in an out-of-memory exception despite the fact that collectively there is enough free memory. For developers who work with a lot of large objects, this error condition may be familiar. We’ve received a lot of feedback requesting a solution to LOH fragmentation.

A Better LOH Allocator
In .NET 4.5, we made two improvements to the large object heap. First, we significantly improved the way the runtime manages the free list, thereby making more effective use of fragments. Now the memory allocator will revisit the memory fragments that earlier allocations couldn’t use. Second, when in server GC mode, the runtime balances LOH allocations between each heap. Prior to .NET 4.5, we only balanced the SOH. We’ve observed substantial improvements in some of our LOH allocation benchmarks as a result of both changes.

We’re also starting to collect telemetry about how the LOH is used. We’re tracking how often out-of-memory conditions in managed applications are due to LOH fragmentation. We’ll use this data to measure and improve memory management of real-world applications.

What is High Cohesion

High cohesion is when you have a class that does a well-defined job. Low cohesion is when a class does a lot of jobs that don't have much in common.
Let's take this example:
You have a class that adds two numbers, but the same class creates a window displaying the result. This is a low-cohesion class, because the window and the adding operation don't have much in common: the window is the visual part of the program and the adding function is the logic behind it.
To create a high-cohesion solution, you would have to create a Window class and a Sum class. The window calls Sum's method to get the result and displays it. This way you develop the logic and the GUI of your application separately.
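A minimal C# sketch of the high-cohesion version described above (the console output stands in for a real GUI window):

class Sum
{
    // pure calculation logic, nothing about presentation
    public int Add(int a, int b) => a + b;
}

class Window
{
    private readonly Sum _sum = new Sum();

    // the window only deals with displaying the result
    public void ShowResult(int a, int b) =>
        Console.WriteLine("Result: " + _sum.Add(a, b));
}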

Wednesday, November 28, 2018

Difference Between flatMap and SwitchMap

In switchMap, the previous inner subscription is automatically cancelled when a new subscription starts. In flatMap, the previous inner subscription is not unsubscribed automatically, so all inner subscriptions keep running in parallel.

Example: flatMap
const startButton = document.getElementById('start');

const startObs = Rx.Observable.fromEvent(startButton, 'click');
const intervalObs = Rx.Observable.interval(1000);

startObs
  .flatMap((evt) => intervalObs)
  .subscribe((x) => console.log(x));
 
//sample output
//0....1....2....3..0.4..1.5..2.6..

Example: switchMap

const startButton = document.getElementById('start');

const startObs = Rx.Observable.fromEvent(startButton, 'click');
const intervalObs = Rx.Observable.interval(1000);

startObs
  .switchMap((evt) => intervalObs)
  .subscribe((x) => console.log(x));
 
//sample output
//0....1....2....3..0....1....2....3....4



https://medium.com/@kevinle/difference-between-flatmap-and-switchmap-explained-without-words-6d877cad1d60

Difference Between MergeMap and ConcatMap


concatMap is sequential: it does not subscribe to the next inner observable while another one is still in progress.

mergeMap is parallel: it subscribes to all inner observables at once and emits values as soon as they arrive.


Example: concat

const series1$ = of('a', 'b');

const series2$ = of('x', 'y');

const result$ = concat(series1$, series2$);

result$.subscribe(console.log);

Output : a ,b, x, y

Example: merge

const series1$ = interval(1000).pipe(map(val => val*10));

const series2$ = interval(1000).pipe(map(val => val*100));

const result$ = merge(series1$, series2$);


result$.subscribe(console.log);

Output 

0
0
10
100
20
200
30
300
https://blog.angular-university.io/rxjs-higher-order-mapping/

Switchmap

switchMap cancels the previous inner subscription before subscribing to the new one.

Let’s say you have a simple task. You have a button, and when you click on it, you need to start an interval. Let’s see how we would implement this the simple way.

const button = document.querySelector('button');

Observable.fromEvent(button, 'click').subscribe(event => {

 Observable.interval(1000).subscribe(num => {
    console.log(num);
 });

});

We need to subscribe to the fromEvent() observable, which internally adds a new click event listener to our button. When the user clicks on the button, we need to subscribe to the interval() observable, which internally invokes the native JS setInterval() function.

When you click on the button, the output is:

0
1
2
3

While it works fine, there are two drawbacks to the code above.
1. It’s starting to look like callback hell.
2. We need to handle the disposal of every subscription by ourselves.

Let’s see how higher order observables make things easier for us.

A higher order observable is just a fancy name for an observable that emits observables. Let’s change the example a little bit so you can see what I’m talking about.


const click$ = Observable.fromEvent(button, 'click');
const interval$ = Observable.interval(1000);

const clicksToInterval$ = click$.map(event => {
  return interval$;
});

clicksToInterval$.subscribe(intervalObservable => console.log(intervalObservable));

When the user clicks on the button, we leverage the map() operator to return an interval() observable to the stream.

When we subscribe, the click$ observable will next() an interval observable.


The output will be:
IntervalObservable{}
IntervalObservable{}
IntervalObservable{}

You may notice that, in this case, we never invoke the interval. In contrast, in the first example, we saw the numbers running in the console.

That’s because we never called subscribe() on our interval$ observable. Remember that observables are lazy — if we want to pull a value out of an observable, we must subscribe().

clicksToInterval$.subscribe(intervalObservable$ => {

   intervalObservable$.subscribe(num => {
     console.log(num);
   });

});


mergeAll
When the inner observable emits, let me know by merging the value to the outer observable.
Under the hood, the mergeAll() operator basically does what we did in the last example. It takes the inner observable, subscribes to it, and pushes the value to the observer.

function myMergeMap(innerObservable) {

  /** the click observable, in our case */
  const source = this;

  return new Observable(observer => {
    source.subscribe(outerValue => {
 
      /** innerObservable(outerValue) returns the inner observable — the interval observable, in our case */
      innerObservable(outerValue).subscribe(innerValue => {
        observer.next(innerValue);
      });
   
   });
  });
 }

 Observable.prototype.myMergeMap = myMergeMap;

From the above code, we can learn that each time we click on the button, we are invoking the subscribe() method of the inner interval() observable — which leads to multiple independent intervals in our page.

If this is what you’re after, you are good to go. But, if you want to cancel the previous subscriptions and keep only one, you’ll need the switch() operator.


switch
Like mergeMap(), but when the source observable emits, cancel any previous subscription to the inner observable.
As the name suggests, switch() switches to the new subscription and cancels the previous one.

If we change our code to switch() and click on the button multiple times, we’ll see that each time we click we are given a new interval and the previous one is canceled.



https://medium.com/@juliapassynkova/switchmap-in-details-b98d34145311

https://netbasal.com/understanding-mergemap-and-switchmap-in-rxjs-13cf9c57c885

Tuesday, November 27, 2018

Interview

1. If you have multiple subscriptions, how will you unsubscribe from all of them in one go?
Ans: using takeUntil

2. Difference between Angular 4 and Angular 5.
Ans : https://interview-preparation-for-you.blogspot.com/2018/12/difference-between-angular-4-and.html

3. What are the ways components can communicate in Angular?
Ans: using @Input(), @Output(), @ViewChild() and a shared service

4. What is new in ES-6?
Ans : https://interview-preparation-for-you.blogspot.com/2018/12/what-is-new-in-es6.html

5. What is Flex in CSS?
6. Difference between localstorage and sessionStorage.
Ans : https://interview-preparation-for-you.blogspot.com/2018/12/difference-between-localstorage-and.html

7. What are the types of binding in Angular?
Ans: There are four types of binding:
a. Property binding b. Event binding c. String interpolation d. Two-way data binding

8. What do the map and filter functions do on an array?
Ans: map transforms every element of the array; filter returns only the elements that satisfy the given condition.

9. Difference between promise and Observable
Ans : https://interview-preparation-for-you.blogspot.com/2018/02/difference-between-promise-and.html

10. Difference between MergeMap and ForkJoin
Ans : https://interview-preparation-for-you.blogspot.com/2018/12/difference-between-mergemap-and-forkjoin.html

11. What is hoisting in JavaScript?
Ans: In JavaScript, function and variable declarations are pulled up (hoisted) to the top of their scope. Problems caused by it can be avoided by using an IIFE or by declaring variables with let.

PubSub design patterns vs observer pattern


Let’s assume that you are searching for a job as a software engineer and are very interested in a company named ‘Banana Inc.’. So, you contacted their hiring manager and gave him your contact number. He assured you that if there is any vacancy they will let you know. And there are several other candidates interested too, like you. They will let all of the candidates know about the vacancy, and maybe if you respond then they will conduct an interview. So, how is this scenario related to the ‘Observer’ design pattern? Here, the company ‘Banana Inc.’ is the Subject, which maintains a list of all the Observers (candidates like you) and notifies the observers of a certain event, ‘vacancy’. Ain’t it easy, mate?

In ‘Publisher-Subscriber’ pattern, senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers.

This means that the publisher and subscriber don’t know about the existence of one another. There is a third component, called broker or message broker or event bus, which is known by both the publisher and subscriber, which filters all incoming messages and distributes them accordingly.

Let’s list out the differences as a quick summary (a small sketch follows the list):
  • In the Observer pattern, the Observers are aware of the Subject, and the Subject maintains a record of the Observers. In Publisher/Subscriber, publishers and subscribers don’t need to know each other; they simply communicate with the help of message queues or a broker.
  • In the Publisher/Subscriber pattern, components are loosely coupled, as opposed to the Observer pattern.
  • The Observer pattern is mostly implemented in a synchronous way, i.e. the Subject calls the appropriate method of all its observers when some event occurs. The Publisher/Subscriber pattern is mostly implemented in an asynchronous way (using a message queue).
  • The Observer pattern needs to be implemented in a single application address space. On the other hand, the Publisher/Subscriber pattern is more of a cross-application pattern.
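A small C# sketch of the structural difference (all names are illustrative, not from any particular library):

// using System; using System.Collections.Generic;

// Observer: the subject knows its observers and notifies them directly.
class HiringManager
{
    private readonly List<Action<string>> _candidates = new List<Action<string>>();

    public void Subscribe(Action<string> candidate) => _candidates.Add(candidate);

    public void AnnounceVacancy(string role) => _candidates.ForEach(c => c(role));
}

// Pub/Sub: publisher and subscriber only know the broker, which routes by topic.
class MessageBroker
{
    private readonly Dictionary<string, List<Action<string>>> _topics =
        new Dictionary<string, List<Action<string>>>();

    public void Subscribe(string topic, Action<string> handler)
    {
        if (!_topics.ContainsKey(topic))
            _topics[topic] = new List<Action<string>>();
        _topics[topic].Add(handler);
    }

    public void Publish(string topic, string message)
    {
        if (_topics.TryGetValue(topic, out var handlers))
            handlers.ForEach(h => h(message));
    }
}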

https://hackernoon.com/observer-vs-pub-sub-pattern-50d3b27f838c

Thursday, November 22, 2018

Automapper

AutoMapper is used to copy data from one object to another.

When we have to copy properties from a view model to a model that has the same properties, AutoMapper comes into the picture.

The real problem arises when we have 25-30 fields in a record from the database and we need to repeat this same binding code all over the project.

AutoMapper not only reduces the effort, it also removes the large amount of repetitive binding code that would otherwise have to be written for such a large number of fields.

Life without Automapper

 class MainClass
    {
        private void CopyData()
        {
            Person objPerson = new Person();
            Student objStudent = new Student();

            objPerson.FirstName = "Khaleek";
            objPerson.LastName = "Ahmad";

            objStudent.FirstName = objPerson.FirstName;
            objStudent.LastName = objPerson.LastName;
        }
    }

    class Student
    {
        public String FirstName  { get; set; }
        public String LastName { get; set; }
    }

    class Person
    {
        public String FirstName { get; set; }
        public String LastName { get; set; }
    }

Life with Automapper
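The original post showed this part as an image; here is a minimal sketch using AutoMapper's MapperConfiguration API (details can vary slightly between AutoMapper versions):

// using AutoMapper;
class MainClass
{
    private void CopyData()
    {
        // configure the Person -> Student mapping once
        var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, Student>());
        IMapper mapper = config.CreateMapper();

        Person objPerson = new Person { FirstName = "Khaleek", LastName = "Ahmad" };

        // all matching properties are copied in a single call
        Student objStudent = mapper.Map<Student>(objPerson);
    }
}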



Wednesday, November 21, 2018

Types of Dependency Injection Lifetimes


We define the lifetime when we register the service.
There are three ways by which you can do that, which in turn decide how the lifecycle of the service is managed.
  1. Transient: A new instance is created every time the service is requested.
  2. Scoped: A new instance is created for every scope (each request is a scope). Within the scope, the service is reused.
  3. Singleton: The service is created only once, and used everywhere.

Register the Transient Service

Now, under the ConfigureServices method of the startup class, register SomeService via the ITransientService interface as shown below.
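A minimal sketch of the registration (ITransientService and SomeService are the names used in the linked tutorial; the commented lines show the analogous scoped and singleton registrations):

public void ConfigureServices(IServiceCollection services)
{
    // Transient: a new SomeService is created every time ITransientService is resolved
    services.AddTransient<ITransientService, SomeService>();

    // the other lifetimes are registered with the analogous methods:
    // services.AddScoped<IScopedService, SomeService>();
    // services.AddSingleton<ISingletonService, SomeService>();
}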

https://www.tektutorialshub.com/dependency-injection-lifetime-transient-singleton-scoped/

Dependency Injection

Partial View

A partial view is a reusable view which can be used as a child view in multiple other views. It eliminates duplicate coding by reusing the same partial view in multiple places. You can use the partial view in the layout view, as well as in other content views.

If a partial view will be shared by views from different controller folders, then create it in the Shared folder; otherwise you can create the partial view in the same folder where it is going to be used.

Render Partial View
You can render a partial view in the parent view using the HTML helper methods Partial(), RenderPartial() or RenderAction(). Each method serves a different purpose. Let's have an overview of each method and then see how to render a partial view using these methods.

Html.Partial()
The @Html.Partial() helper method renders the specified partial view. It accepts the partial view name as a string parameter and returns an MvcHtmlString. Because it returns an HTML string, you have a chance of modifying the HTML before rendering.


http://www.tutorialsteacher.com/mvc/partial-view-in-asp.net-mvc

Types of Caching

There are three types of Caching
  1. We can use Page Output Caching for those pages whose content is relatively static. So rather than generate a page on each user request, we can cache the page using page output caching so that it can be accessed from the cache itself. Pages can be generated once and then cached for subsequent fetches. Page output caching allows the entire content of a given page to be stored in the cache.
  2. Page Fragment Caching: ASP.NET provides a mechanism for caching portions of pages, called page fragment caching. To cache a portion of a page, you must first encapsulate the portion of the page you want to cache into a user control. In the user control source file, add an OutputCache directive specifying the Duration and VaryByParam attributes. When that user control is loaded into a page at runtime, it is cached, and all subsequent pages that reference that same user control will retrieve it from the cache
  3. Data Caching: Caching data can dramatically improve the performance of an application by reducing database contention and round-trips. Simply, data caching stores the required data in cache so that the web server will not send requests to the DB server every time for each and every request, which increases web site performance. I'd also add that you can also store user data in this cache provided you are aware of the limitations (the length of time the data is available for, for example) as well as data from many other kinds of data store.
For data caching, ASP.NET provides a Cache object,

e.g. Cache["data"] = dsState;
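A minimal data-caching sketch for ASP.NET Web Forms (GetStatesFromDatabase is a hypothetical data-access call):

// store the dataset in the cache the first time it is needed
if (Cache["States"] == null)
{
    DataSet dsStates = GetStatesFromDatabase();   // hypothetical helper
    Cache.Insert("States", dsStates, null,
                 DateTime.Now.AddMinutes(20),                  // absolute expiration
                 System.Web.Caching.Cache.NoSlidingExpiration);
}

// later requests read from the cache instead of hitting the database
DataSet dsState = (DataSet)Cache["States"];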

What is Jenkins

Elastic Search

Saturday, November 17, 2018

ViewChildren ContentChildren

Content and View
The child elements which are located inside the template of a component are called view children.
The elements which are used between the opening and closing tags of the host element of a given component are called content children.

In the diagram from the linked article, todo-input and todo-item appear inside the template of the todo-app component, while app-footer is passed between the opening and closing tags of the todo-app host element and is projected where the component's template defines it.
This means that todo-input and todo-item could be considered view children of todo-app, and app-footer (if it is defined as an Angular component or directive) could be considered a content child.

A content child can be accessed using @ContentChild()/@ContentChildren(), and a view child can be accessed using @ViewChild()/@ViewChildren().


https://medium.com/@tkssharma/understanding-viewchildren-viewchild-contentchildren-and-contentchild-b16c9e0358e

Union in Typescript

TypeScript 1.4 gives programs the ability to combine types. Union types are a powerful way to express a value that can be one of several types. Two or more data types are combined using the pipe symbol (|) to denote a union type. In other words, a union type is written as a sequence of types separated by vertical bars.

Syntax: Union literal

Type1|Type2|Type3 

Example: Union Type Variable

var val:string|number 
val = 12 
console.log("numeric value of val "+val) 
val = "This is a string" 
console.log("string value of val "+val)
In the above example, the variable's type is a union type. It means that the variable can contain either a number or a string as its value.
On compiling, it will generate the following JavaScript code.
//Generated by typescript 1.8.10
var val;
val = 12;
console.log("numeric value of val " + val);
val = "This is a string";
console.log("string value of val " + val);
Its output is as follows −
numeric value of val 12 
string value of val This is a string 

Example: Union Type and function parameter

function disp(name:string|string[]) { 
   if(typeof name == "string") { 
      console.log(name) 
   } else { 
      var i; 
      
      for(i = 0;i<name.length;i++) { 
         console.log(name[i])
      } 
   } 
} 
disp("mark") 
console.log("Printing names array....") 
disp(["Mark","Tom","Mary","John"])
