Monday, February 25, 2019

Can we return a single value from a stored procedure?

There are two ways of returning data from a stored procedure to a calling program: output parameters and the return value.

Returning Data Using Output Parameter

If you specify the OUTPUT keyword for a parameter in the procedure definition, the procedure can return the current value of the parameter to the calling program when the procedure exits. To save the value of the parameter in a variable that can be used in the calling program, the calling program must use the OUTPUT keyword when executing the procedure.

Returning Data Using Return Value

A procedure can return an integer value called a return value to indicate the execution status of a procedure. You specify the return value for a procedure using the RETURN statement. As with OUTPUT parameters, you must save the return code in a variable when the procedure is executed to use the return value in the calling program.

With output parameters:

create procedure Out_test3
(
    @OutValue1 int output,
    @OutValue2 datetime output,
    @outValue3 varchar(10) output
)
as
begin
    set @OutValue1 = 10
    set @OutValue2 = GETDATE()
    set @outValue3 = 'test'
end
go

declare @x int, @y datetime, @z varchar(10);

exec Out_test3 @OutValue1 = @x output, @OutValue2 = @y output, @outValue3 = @z output

select @x 'integer', @y 'datetime', @z 'varchar'

With a return value:

create procedure Return_test1 (@username varchar(20))
as
begin
    declare @returnvalue int
    insert into testUser(UserName) values (@username)
    set @returnvalue = @@ERROR
    return @returnvalue;
end
go

declare @x int

exec @x = Return_test1 'aaa'

select @x 'Return_value'

Cursor in SQL Server

WHAT IS A CURSOR?
A cursor is a database object used to retrieve data from a result set one row at a time. A cursor can be used when data needs to be updated row by row.

CURSOR LIFE CYCLE

Declaring Cursor
A cursor is declared by defining the SQL statement.

Opening Cursor
A cursor is opened for storing data retrieved from the result set.

Fetching Cursor
When a cursor is opened, rows can be fetched from the cursor one by one or in a block to do data manipulation.

Closing Cursor
The cursor should be closed explicitly after data manipulation.

Deallocating Cursor
A cursor should be deallocated to delete the cursor definition and release all the system resources associated with it.

WHY USE CURSORS?

In relational databases, operations are made on a set of rows. For example, a SELECT statement returns a set of rows which is called a result set. Sometimes the application logic needs to work with a row at a time rather than the entire result set at once. This can be done using cursors.

In programming, we use a loop like FOR or WHILE to iterate over one item at a time; a cursor follows the same approach and may be preferred because it mirrors that familiar logic.

Cursor Example

The following cursor retrieves emp_id and emp_name from the Employee table. @@FETCH_STATUS is 0 as long as a row was fetched successfully; once all rows have been fetched, @@FETCH_STATUS becomes -1.

use Product_Database
SET NOCOUNT ON; 

DECLARE @emp_id int ,@emp_name varchar(20), 
    @message varchar(max); 

PRINT '-------- EMPLOYEE DETAILS --------'; 

DECLARE emp_cursor CURSOR FOR   
SELECT emp_id,emp_name 
FROM Employee
order by emp_id; 

OPEN emp_cursor 

FETCH NEXT FROM emp_cursor   
INTO @emp_id,@emp_name 

print 'Employee_ID  Employee_Name'     

WHILE @@FETCH_STATUS = 0 
BEGIN 
    print '   ' + CAST(@emp_id as varchar(10)) +'           '+
        cast(@emp_name as varchar(20))

   
    FETCH NEXT FROM emp_cursor   
INTO @emp_id,@emp_name 
 
END   
CLOSE emp_cursor; 
DEALLOCATE emp_cursor; 


CURSOR REPLACEMENT IN SQL SERVER

There's one replacement for cursors in SQL Server: joins.

Suppose we have to retrieve data from two tables simultaneously by comparing primary keys and foreign keys. In this type of problem a cursor gives very poor performance, as it processes each and every row one at a time. Joins, on the other hand, operate on the whole set at once and return only the rows that meet the condition. So here joins are faster than cursors.

The following example explains the replacement of cursors through joins.

Suppose we have two tables, ProductTable and BrandTable. The primary key of BrandTable is brand_id, which is stored in ProductTable as the foreign key brand_id. Now suppose I have to retrieve brand_name from BrandTable using the foreign key brand_id from ProductTable. In this situation the cursor program is as follows:

use Product_Database
SET NOCOUNT ON; 

DECLARE @brand_id int   
DECLARE @brand_name varchar(20) 


PRINT '--------Brand Details --------'; 

DECLARE brand_cursor CURSOR FOR   
SELECT distinct(brand_id)
FROM ProductTable; 

OPEN brand_cursor 

FETCH NEXT FROM brand_cursor   
INTO @brand_id 

WHILE @@FETCH_STATUS = 0 
BEGIN 
    select brand_id,brand_name from BrandTable where brand_id=@brand_id
--(@brand_id is of ProductTable)
   
    FETCH NEXT FROM brand_cursor   
INTO @brand_id 
 
END   
CLOSE brand_cursor; 
DEALLOCATE brand_cursor;

The same program can be done using joins as follows,

select distinct b.brand_id, b.brand_name
from BrandTable b
inner join ProductTable p on b.brand_id = p.brand_id


Source : https://www.c-sharpcorner.com/article/cursors-in-sql-server/

Converter in WPF

A value converter functions as a bridge between a target and a source. It is needed when the target's type differs from the source's type; for instance, you have a text box and a button control, and you want to enable or disable the button depending on whether the text box is filled or empty.

In this case you need to convert the string data to a Boolean. This is possible using a value converter. To implement a value converter, you inherit from IValueConverter in the System.Windows.Data namespace and implement its two methods, Convert and ConvertBack.

Note: In WPF, Binding helps to flow the data between the two WPF objects. The bound object that emits the data is called the Source and the other (that accepts the data) is called the Target.

using System; 
using System.Collections.Generic; 
using System.Linq; 
using System.Text; 
using System.Threading.Tasks; 
using System.Windows.Data; 
 
namespace ValueConverters
{
    public class ValueConverter : IValueConverter
    {
        public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            bool isEnabled = true;
            if (value == null || string.IsNullOrEmpty(value.ToString()))
            {
                isEnabled = false;
            }
            return isEnabled;
        }

        public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
        {
            throw new NotImplementedException();
        }
    }
}


I have created a class named ValueConverter that converts the value from the source to the target. It does the conversion by implementing IValueConverter. There are two methods in the ValueConverter class, implemented from the IValueConverter interface: Convert converts from the source to the target, and ConvertBack converts from the target back to the source.
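To wire the converter up, it would typically be declared as a resource and referenced in a binding. Here is a sketch; the `local` xmlns mapping, the `x:Key` name, and the control names are illustrative assumptions, not from the original:

```xml
<!-- Assumes xmlns:local="clr-namespace:ValueConverters" is declared on the root element -->
<Window.Resources>
    <local:ValueConverter x:Key="TextToEnabledConverter" />
</Window.Resources>

<StackPanel>
    <TextBox x:Name="InputBox" />
    <!-- The button is enabled only while InputBox contains text -->
    <Button Content="Submit"
            IsEnabled="{Binding Text, ElementName=InputBox,
                        Converter={StaticResource TextToEnabledConverter}}" />
</StackPanel>
```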

Saturday, February 23, 2019

Some Operator of RxJs

take(n): emits the first n values, then completes the observable.
takeWhile(predicate): tests each emitted value against a predicate; as soon as the predicate returns `false`, it completes.
first(): emits the first value and completes.
first(predicate): checks each value against a predicate function; when the predicate returns `true`, it emits that value and completes.
takeUntil(notifier: Observable): emits values until the provided observable emits.

Friday, February 22, 2019

Docker

A container can be thought of as a really lightweight VM.

Containers are much smaller than VMs by default — a container doesn't contain a full operating system, whereas a VM does. Containers thus require a lot less in terms of processing power in order to run — they tend to be mere megabytes in size and take just seconds to start.

VMs tend to be gigabytes in size and can take minutes to start, because operating systems are big! Containers make use of libraries and packages on the host operating system in order to access resources. They then make use of their own libraries and packages in order to emulate separate operating systems, as needed, instead of installing unnecessary bloat. For example, you could run an Ubuntu image on your Windows machine without needing to install Ubuntu.

The smallness of containers is, of course, really great when it comes to scaling applications. If we were hosting recommend_service as a VM and had a spike in traffic, users would just have to deal with some extra latency and perhaps some errors while we brought extra VMs online. On the other hand, if we were using containers, we would be able to bring them online much faster, and potentially even keep extra instances running, in case of spikes, because containers are cheap.

Docker is the de facto industry standard container platform.

Docker allows the creation of images. Images are instantiated to create containers (if you are familiar with object orientated programming, then images are like classes, and containers are like objects). In this section, we will create and run a container, and the container will contain a service we wish to deploy.
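As a sketch of the image/container relationship, an image definition and the commands to build and instantiate it might look like this (the image name `my-service`, the base image, and `app.py` are illustrative assumptions, not from the original):

```dockerfile
# Dockerfile: the image definition (the "class")
# Reuse a small base image instead of shipping a full OS.
FROM python:3-alpine
WORKDIR /app
COPY app.py .
# What a container started from this image will run.
CMD ["python", "app.py"]
```

Building the image and then instantiating containers (the "objects") from it:

docker build -t my-service .
docker run -d my-service

Each `docker run` creates a fresh container from the same image.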

Types of UDF

We have three types of user-defined functions.

1. Scalar Function
A user-defined scalar function returns a single value as a result of the actions performed by the function.

Create function fnGetEmpFullName
(
 @FirstName varchar(50),
 @LastName varchar(50)
)
returns varchar(101)
As
Begin
 return (Select @FirstName + ' ' + @LastName);
end

2. Inline Table-Valued Function
A user-defined inline table-valued function returns a table as a result of the actions performed by the function. Its value must be derived from a single SELECT statement. Inline UDFs can return a single row or multiple rows, but can contain only a single SELECT statement.

 --Create function to get employees
Create function fnGetEmployee()
returns Table
As
 return (Select * from Employee)

3. Multi-Statement Table-Valued Function
A user-defined multi-statement table-valued function returns a table variable as a result of the actions performed by the function. Here a table variable must be explicitly declared and defined, and its value can be derived from multiple SQL statements.

The multi-statement UDFs can contain any number of statements that populate the table variable to be returned. Notice that although you can use INSERT, UPDATE, and DELETE statements against the table variable being returned, a function cannot modify data in permanent tables. Multi-statement UDFs come in handy when you need to return a set of rows, but you can't enclose the logic for getting this rowset in a single SELECT statement.

 --Create function for EmpID,FirstName and Salary of Employee
Create function fnGetMulEmployee()
returns @Emp Table
(
EmpID int,
FirstName varchar(50),
Salary int
)
As
begin
 Insert into @Emp Select e.EmpID,e.FirstName,e.Salary from Employee e;
--Now update salary of first employee
 update @Emp set Salary=25000 where EmpID=1;
--It will update only in @Emp table not in Original Employee table
return
end 

Microservice

Microservices are a way of breaking large software projects into loosely coupled modules, which communicate with each other through simple Application Programming Interfaces (APIs).

Resilience: the capacity to recover quickly from difficulties.
Improve fault isolation: Larger applications can remain mostly unaffected by the failure of a single module.

Technology Heterogeneity
Eliminate vendor or technology lock-in: you can try new technology with a different module.

Ease of Understanding: With added simplicity, developers can better understand the functionality of a service.

Scaling
With a large, monolithic service, we have to scale everything together. One small part of our overall system is constrained in performance, but if that behavior is locked up in a giant monolithic application, we have to handle scaling everything as a piece. With smaller services, we can just scale those services that need scaling, allowing us to run other parts of the system on smaller, less powerful hardware.

Ease of Deployment
A one-line change to a million-line-long monolithic application requires the whole application to be deployed in order to release the change. That could be a large-impact, high-risk deployment. In practice, large-impact, high-risk deployments end up happening infrequently due to understandable fear.

With microservices, we can make a change to a single service and deploy it independently of the rest of the system. This allows us to get our code deployed faster. If a problem does occur, it can be isolated quickly to an individual service, making fast rollback easy to achieve.

Composability
One of the key promises of distributed systems and service-oriented architectures is that we open up opportunities for reuse of functionality. With microservices, we allow for our functionality to be consumed in different ways for different purposes. This can be especially important when we think about how our consumers use our software.

Optimizing for Replaceability: easy to replace if needed. With our individual services being small in size, the cost of replacing them with a better implementation, or even deleting them altogether, is much easier to manage.

Deployment of Microservices

The best way to deploy microservices-based applications is within containers, which are complete virtual operating system environments that provide processes with isolation and dedicated access to underlying hardware resources. One of the biggest names in container solutions right now is Docker.


Drawback of Microservices

Large numbers of microservices are difficult to orchestrate.
Understanding, managing and testing dependencies is difficult.
Message flow increases with the number of microservices and hampers performance.
Large numbers of microservices are difficult to secure.
When requirements change, do you create a new microservice for the feature, or rewrite and extend an existing service? That is a difficult decision in an architecture with many microservices.

Thursday, February 21, 2019

Interview

1. What are the types of databinding in WPF?
Ans : https://interview-preparation-for-you.blogspot.com/2019/02/types-of-data-binding-in-wpf.html

2. If we have two text boxes and we set one text box's IsNumeric property to true and the other's to false, how does that work while dependency properties are static?
3. What is converter in WPF?
Ans : https://interview-preparation-for-you.blogspot.com/2019/02/converter-in-wpf.html

4. How can we create converter in WPF?
Ans : https://interview-preparation-for-you.blogspot.com/2019/02/converter-in-wpf.html

5. If we have a read-only text box with a property bound to its Text property, which binding mode needs to be set so that changes to the property are reflected in the text box?
Ans: OneWay (OneWayToSource is the opposite direction: it pushes changes from the text box back to the property).

6. If we have a table in which there is status field with value 0 and 1, we want to change 0 to 1 and 1 to 0.
UPDATE dbo.TestStudents 
SET Status =
( CASE 
WHEN (Status = 1) THEN 0
WHEN (Status = 0) THEN 1
END )

Types of Data Binding in WPF

Types of Binding

OneWay
TwoWay
OneWayToSource
OneTime


One Way

In this type of binding, data flows from the source (the business model) to the target (the UI). Whenever the source property changes, the target is updated, but changes made in the UI are not pushed back to the source.

Two Way

In this, data is getting updated from both sides i.e. source and target. If there is any change from UI, it updates the business model, and vice versa.


OneWayToSource

OneWayToSource is the reverse of OneWay binding; it updates the source property when the target property changes. It is useful when the UI is the originator of the data, for example capturing user input into a property that the rest of the application reads.

OneTime

This has the same behaviour as OneWay, except it updates the user interface only once, when the binding is first initialized. It is a good choice when the source value will not change after initialization.
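As a sketch, the four modes could be set in XAML like this (the property names are placeholders, not from the original):

```xml
<TextBox   Text="{Binding UserName,   Mode=TwoWay}" />          <!-- UI and model stay in sync -->
<TextBlock Text="{Binding Status,     Mode=OneWay}" />          <!-- model to UI -->
<TextBox   Text="{Binding SearchTerm, Mode=OneWayToSource}" />  <!-- UI to model -->
<TextBlock Text="{Binding Title,      Mode=OneTime}" />         <!-- model to UI, once -->
```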

Event Loop

“How is JavaScript asynchronous and single-threaded?” The short answer is that the JavaScript language is single-threaded, and the asynchronous behaviour is not part of the language itself; rather, it is built on top of core JavaScript in the browser (or the programming environment) and accessed through the browser APIs.



Heap - Objects are allocated in a heap, which is just a name for a large, mostly unstructured region of memory.

Stack - This represents the single thread provided for JavaScript code execution. Function calls form a stack of frames (more on this below)

Browser or Web APIs are built into your web browser, and are able to expose data from the browser and surrounding computer environment and do useful complex things with it. They are not part of the JavaScript language itself, rather they are built on top of the core JavaScript language, providing you with extra superpowers to use in your JavaScript code.

Understand EventLoop with Example

function main(){
  console.log('A');
  setTimeout(
    function exec(){ console.log('B'); }
  , 0);
  console.log('C');
}
main();
// Output
// A
// C
// B



Here we have the main function which has 2 console.log commands logging ‘A’ and ‘C’ to the console. Sandwiched between them is a setTimeout call which logs ‘B’ to the console with 0ms wait time.





1. The call to the main function is first pushed into the stack (as a frame). Then the browser pushes the first statement in the main function into the stack which is console.log(‘A’). This statement is executed and upon completion that frame is popped out. Alphabet A is displayed in the console.

2. The next statement (setTimeout() with callback exec() and 0ms wait time) is pushed into the call stack and execution starts. setTimeout function uses a Browser API to delay a callback to the provided function. The frame (with setTimeout) is then popped out once the handover to browser is complete (for the timer).

3. console.log(‘C’) is pushed to the stack while the timer runs in the browser for the callback to the exec() function. In this particular case, as the delay provided was 0ms, the callback will be added to the message queue as soon as the browser receives it (ideally).

4. After the execution of the last statement in the main function, the main() frame is popped out of the call stack, thereby making it empty. For the browser to push any message from the queue to the call stack, the call stack has to be empty first. That is why even though the delay provided in the setTimeout() was 0 seconds, the callback to exec() has to wait till the execution of all the frames in the call stack is complete.

5. Now the callback exec() is pushed into the call stack and executed. The alphabet B is displayed on the console. This is the event loop of JavaScript.

So the delay parameter in setTimeout(function, delayTime) does not stand for the precise time delay after which the function is executed. It stands for the minimum wait time after which at some point in time the function will be executed.
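The minimum-wait behaviour can be demonstrated with a short sketch: even a 0 ms timeout has to wait for the synchronous code on the stack to finish.

```javascript
// Sketch: a 0 ms timeout still waits for the call stack to empty.
const order = [];

setTimeout(() => order.push('timeout'), 0);

// Keep the call stack busy for ~50 ms.
const start = Date.now();
while (Date.now() - start < 50) {}

order.push('sync');
// The callback only runs after the synchronous code finishes,
// so the final order is ['sync', 'timeout'] despite the 0 ms delay.
```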
https://medium.com/front-end-weekly/javascript-event-loop-explained-4cd26af121d4

How to prevent Browser cache on Angular

angular-cli resolves this brilliantly by providing an --output-hashing flag for the build command. Example usage:

ng build --aot --output-hashing=all
Bundling & Tree-Shaking provides some details and context.

Running ng help build, documents the flag:

--output-hashing=none|all|media|bundles (String) Define the output filename cache-busting hashing mode.
aliases: -oh , --outputHashing

--output-hashing all — hash contents of the generated files and append hash to the file name to facilitate browser cache busting (any change to file content will result in different hash and hence browser is forced to load a new version of the file)
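The idea behind content hashing can be sketched in a few lines; this is a toy illustration using Node's crypto module, not Angular's actual implementation:

```typescript
import { createHash } from 'crypto';

// Derive a cache-busting filename from the file's content.
function hashedName(name: string, content: string): string {
  const hash = createHash('md5').update(content).digest('hex').slice(0, 8);
  const dot = name.lastIndexOf('.');
  return `${name.slice(0, dot)}.${hash}${name.slice(dot)}`;
}

// Same content gives the same name (cache hit);
// changed content gives a new name, forcing the browser to re-download.
const v1 = hashedName('main.js', 'console.log(1)');
const v2 = hashedName('main.js', 'console.log(2)');
// v1 !== v2
```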

Other Way

Found a way to do this: simply add a query string to load your components, like so:

@Component({
  selector: 'some-component',
  templateUrl: `./app/component/stuff/component.html?v=${new Date().getTime()}`,
  styleUrls: [`./app/component/stuff/component.css?v=${new Date().getTime()}`]
})
This should force the client to load the server's copy of the template instead of the browser's. If you would like it to refresh only after a certain period of time you could use this ISOString instead:

new Date().toISOString() //2016-09-24T00:43:21.584Z
And take a substring of it so that it only changes once an hour, for example:

new Date().toISOString().substr(0,13) //2016-09-24T00

Or

just add the following meta tags at the top:

<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">

Interview

What is the maximum tuple size?
Ans: A Tuple can hold up to 8 elements; the eighth element must itself be a tuple, which allows nesting beyond 8.

If we have lakhs of records in a database and we are inserting and selecting at the same time, can we do that, and how?
Types of user defined function?
Difference between var and dynamic.
Difference between abstract and interface.
Why we need interface?

Can we use async and await separately?
Ans: No. await can only be used inside a method marked async; an async method without any await compiles, but runs synchronously and produces a compiler warning.

What is the use of cursor?
Write a lambda query and Sql query to fetch data from a list/table


Give me some names of structural patterns?
Composite pattern (note: the Observer pattern is behavioral, not structural)
Difference between factory and Abstract factory.
The main difference between a “factory method” and an “abstract factory” is that the factory method is a single method, and an abstract factory is an object.
The factory method is just a method, it can be overridden in a subclass, whereas the abstract factory is an object that has multiple factory methods on it.
The Factory Method pattern uses inheritance and relies on a subclass to handle the desired object instantiation.
What is return type of procedure?
Integer
Can we return any value from procedure?
In SQL Server by using the return values, we can return only one integer. It is not possible, to return more than one value using return values, whereas in output parameters, we can return any data type and a stored procedure can have more than one output parameter.
How to return value from function?
What is the use of view?
Can we update table from view?

Saturday, February 16, 2019

ng-container and TemplateRef

Custom Elements v1: Reusable Web Components

why you will not find components inside Angular

Pure pipe Vs Impure Pipe


Lettable operators

With RxJS 5.5 came the introduction of pipeable, or “lettable”, operators.

Those operators are pure functions that can be used as standalone operators instead of methods on an observable.

They’re lightweight, will make your code easily re-usable and can decrease your overall build size.

Previously, we used to do the following for things like filter, map, scan, …


import { from } from 'rxjs/observable/from';

const source$ = from([1, 2, 3, 4, 5, 6, 7, 8, 9]);

source$
  .filter(x => x % 2)
  .map(x => x * 2)
  .scan((acc, next) => acc + next, 0)
  .startWith(0)
  .subscribe(console.log)

Operators as pure functions
Whereas now all those operators are available as pure functions under rxjs/operators.

What does that mean? That means it’s really easy to build custom operators !

Let’s say I know that I’m going to re-use this filter, well, with lettable operators I can just do this:

import { filter, map, reduce } from 'rxjs/operators';

const filterOutEvens = filter(x => x % 2);
const sum = reduce((acc, next) => acc + next, 0);
const doubleBy = x => map(value => value * x);

So now we want a way to use those operators, how could we do that?

Well, we said those operators are “lettable” that means we can use them by calling the let method on an observable:

import { Observable } from 'rxjs/Rx';
import { filter, map, reduce } from 'rxjs/operators';

const filterOutEvens = filter(x => x % 2);
const sum = reduce((acc, next) => acc + next, 0);
const doubleBy = x => map(value => value * x);

const source$ = Observable.range(0, 10);

source$.let(filterOutEvens).subscribe(x => console.log(x)); // [1, 3, 5, 7, 9]

And if we want to chain multiple lettable operators we can keep dot chaining:

import { Observable } from 'rxjs/Rx';
import { filter, map, reduce } from 'rxjs/operators';

const filterOutEvens = filter(x => x % 2);
const sum = reduce((acc, next) => acc + next, 0);
const doubleBy = x => map(value => value * x);

const source$ = Observable.range(0, 10);

source$
  .let(filterOutEvens)
  .let(doubleBy(2))
  .let(sum)
  .subscribe(x => console.log(x)); // 50


The pipe method
All this looks cool but its still very verbose. Well, thanks to RxJS 5.5 observables now have a pipe method available on the instances allowing you to clean up the code above by calling pipe with all our pure functions operators:


const { Observable } = require('rxjs/Rx');
const { filter, map, reduce } = require('rxjs/operators');

const filterOutEvens = filter(x => x % 2)
const doubleBy = x => map(value => value * x);
const sum = reduce((acc, next) => acc + next, 0);
const source$ = Observable.range(0, 10)

source$.pipe(
  filterOutEvens,
  doubleBy(2),
  sum)
  .subscribe(console.log); // 50
What does that mean? It means that any operators you previously used on the instance of an observable are available as pure functions under rxjs/operators. This makes building a composition of operators, or re-using operators, really easy, without having to resort to all sorts of programming gymnastics where you have to create a custom observable extending Observable and then overwrite lift just to make your own custom thing.


The Pipe util
Alright, hold up: “but what if I want to re-use this logic in different places? Do I build a custom .pipe with my operators?” you'll ask me. Well, you don't even have to do that because, lucky for us, RxJS also comes with a standalone pipe utility!

import { Observable, pipe } from 'rxjs/Rx';
import { filter, map, reduce } from 'rxjs/operators';

const filterOutEvens = filter(x => x % 2);
const sum = reduce((acc, next) => acc + next, 0);
const doubleBy = x => map(value => value * x);

const complicatedLogic = pipe(
  filterOutEvens,
  doubleBy(2),
  sum
);

const source$ = Observable.range(0, 10);

source$.let(complicatedLogic).subscribe(x => console.log(x)); // 50

Meaning we can easily compose a bunch of pure function operators and pass them as a single operator to an observable!

https://blog.hackages.io/rxjs-5-5-piping-all-the-things-9d469d1b3f44

RxJS 6 Features

Avoiding takeUntil Leaks

takeUntil

Let’s start with a basic example where we’ll manually unsubscribe from two subscriptions. In this example we’ll subscribe to an Apollo watchQuery to get data from a GraphQL endpoint, and we’ll also create an interval observable that we subscribe to when an onStartInterval method gets called:

import { Component, OnInit, OnDestroy } from '@angular/core';

import { Observable } from 'rxjs/Observable';
import { Subscription } from 'rxjs/Subscription';
import 'rxjs/add/observable/interval';

import { Apollo } from 'apollo-angular';
import gql from 'graphql-tag';

@Component({ ... })
export class AppComponent implements OnInit, OnDestroy {
  myQuerySub: Subscription;
  myIntervalSub: Subscription;

  myQueryObs: Observable<any>;

  constructor(private apollo: Apollo) {}

  ngOnInit() {
    this.myQueryObs = this.apollo.watchQuery({
      query: gql`
        query getAllPosts {
          allPosts {
            title
            description
            publishedAt
          }
        }
      `
    });

    this.myQuerySub = this.myQueryObs.subscribe(({data}) => {
      console.log(data);
    });
  }

  onStartInterval() {
    this.myIntervalSub = Observable.interval(250).subscribe(val => {
      console.log('Current value:', val);
    });
  }

  ngOnDestroy() {
    this.myQuerySub.unsubscribe();

    if (this.myIntervalSub) {
      this.myIntervalSub.unsubscribe();
    }
  }
}
Now imagine that your component has many similar subscriptions, it can quickly become quite a process to ensure everything gets unsubscribed when the component is destroyed.

Unsubscribing Declaratively with takeUntil
The solution is to compose our subscriptions with the takeUntil operator and use a subject that emits a truthy value in the ngOnDestroy lifecycle hook.

The following snippet does the exact same thing, but this time we unsubscribe declaratively. You’ll notice that an added benefit is that we don’t need to keep references to our subscriptions anymore:

import { Component, OnInit, OnDestroy } from '@angular/core';

import { Observable } from 'rxjs/Observable';
import { Subject } from 'rxjs/Subject';
import 'rxjs/add/observable/interval';
import 'rxjs/add/operator/takeUntil';

import { Apollo } from 'apollo-angular';
import gql from 'graphql-tag';

@Component({ ... })
export class AppComponent implements OnInit, OnDestroy {
  destroy$: Subject<boolean> = new Subject<boolean>();

  constructor(private apollo: Apollo) {}

  ngOnInit() {
    this.apollo.watchQuery({
      query: gql`
        query getAllPosts {
          allPosts {
            title
            description
            publishedAt
          }
        }
      `
    })
    .takeUntil(this.destroy$)
    .subscribe(({data}) => {
      console.log(data);
    });
  }

  onStartInterval() {
    Observable
    .interval(250)
    .takeUntil(this.destroy$)
    .subscribe(val => {
      console.log('Current value:', val);
    });
  }

  ngOnDestroy() {
    this.destroy$.next(true);
    // Now let's also unsubscribe from the subject itself:
    this.destroy$.unsubscribe();
  }
}
Note that Using an operator like takeUntil instead of manually unsubscribing will also complete the observable, triggering any completion event on the observable. Check your code to make sure this doesn’t create any unintended side effects. Thanks to @gerardsans for pointing this out. This is the same for other similar operators like take, takeWhile and first, which will all complete the observable.
