Estimating Potential Future Exposure with QuantLib and AWS EMR – Part II

In my previous post, we saw how to submit a PySpark job to an AWS EMR cluster. In this post, I will go over the setup of the cluster.

Before we start with the cluster, we must have an EC2 key pair (.pem file) and a security group set up in AWS. There are many resources available online for this, so I will not go into the details.

Next is selecting the custom AMI that I mentioned in my previous post as the base machine for all nodes in the cluster, so that Anaconda Python and QuantLib are already installed on the nodes. The AMI that I used is publicly available for now; it costs me about $5 per month to keep it alive. The details of the AMI are shown below.

AMI ID ami-c56a94bf

I selected one m3.xlarge (4 cores, 15 GB) machine for my master node and three r4.8xlarge (16 cores, 244 GB) machines for my worker nodes. I also always select spot pricing for all my nodes.

Once the cluster is up and running (in Waiting mode), you first need to secure-copy the spark-pfe.py file (available at my spark-pfe github repo) to the master node, and then you can SSH into the master node to run the job using the spark-submit script from the previous post.

It took about 7 minutes for the PySpark job to complete; it computed the netting set NPV for 5,000 simulations across 454 future dates for 2 swaps and 1 FxFwd. The output of the simulation was written to an S3 bucket, and now it's time to pull it onto the local machine for analysis. Loading the NPV cube on a local machine is fine for demonstration purposes as we have only 5,000 simulations, but I would load the NPV cube into Amazon Redshift or AuroraDB for production use.

We will use boto3 library for downloading the files from S3.

import numpy as np
import matplotlib.pyplot as plt
from boto3.session import Session
from io import StringIO

NSim = 5000 # I know the number of simulations


session = Session(aws_access_key_id='AWS_ACCESS_KEY',
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY')
s3 = session.resource("s3")
s3.Bucket('your-bucket').download_file('output/time-grid/part-00000', 'time_grid')
s3.Bucket('your-bucket').download_file('output/npv_cube/part-00000', 'npv_cube')
#load the files into numpy arrays for analysis
T = np.loadtxt('time_grid')
# quick way to load; strip the '[' and ']' added by numpy so loadtxt can parse the rows
block = ''
with open('npv_cube', 'r') as data_file:
    for line in data_file:
        block += line.replace('[', '').replace(']', '').lstrip()

npv_cube = np.loadtxt(StringIO(unicode(block)))
# reshape to an array with NSim rows, len(T) columns and 2 NPVs per cell
# (uncollateralized and collateralized)
npv_cube = npv_cube.reshape(NSim, len(T), 2)

Once we have the time grid and the NPV cube in memory, we can do some plots to visualize the simulated exposure paths. The blue paths are the collateralized exposures and the red paths are the uncollateralized ones.

#plot 30 simulated exposure paths (out of 5000)
plt.figure(figsize=(7, 5), dpi=300)
for i in range(0, 30):
    plt.plot(T, npv_cube[i,:,0]/1000., 'r') # Uncollateralized
    plt.plot(T, npv_cube[i, :, 1] / 1000., 'b') # Collateralized
plt.xlabel("Time in years")
plt.ylabel("Exposure in Thousands")
plt.title("Simulated Exposure paths")
plt.show()

Now we can calculate and plot the Expected Exposure, where we keep only the positive exposures from the simulated paths above and average them across simulations at each date.

# Calculate the expected exposure
E = npv_cube.copy()  # work on a copy so the flooring below does not modify the original cube
uncoll_exposures = E[:, :, 0]
uncoll_exposures[uncoll_exposures < 0] = 0
uncoll_expected_exposures = np.sum(uncoll_exposures, axis=0) / NSim

coll_exposures = E[:, :, 1]
coll_exposures[coll_exposures < 0] = 0
coll_expected_exposures = np.sum(coll_exposures, axis=0) / NSim

plt.figure(figsize=(7, 5), dpi=300)
plt.plot(T, uncoll_expected_exposures/1000., 'r', label='Uncollateralized')
plt.plot(T, coll_expected_exposures/1000., 'b', label='Collateralized')
plt.xlabel("Time in years")
plt.ylabel("Expected Exposure in Thousands")
plt.title("Expected Exposure")
plt.legend(loc='upper left')

Now we can plot the PFE curves, where at each future date we take the 95% quantile of the simulated exposures.

# Calculate the PFE curve (95% quantile)
uncoll_PFE_curve = np.percentile(uncoll_exposures, 95, axis=0, interpolation='higher')
coll_PFE_curve = np.percentile(coll_exposures, 95, axis=0, interpolation='higher')
plt.figure(figsize=(7,7), dpi=300)
plt.plot(T, uncoll_PFE_curve, 'r', label='Uncollateralized')
plt.plot(T, coll_PFE_curve, 'b', label='Collateralized')
plt.xlabel("Time in years")
plt.ylabel("PFE")
plt.title("PFE Curves")
plt.legend(loc='upper left')

We can now calculate the Maximum PFE for both the curves.

# calculate the maximum pfe
MPFE = np.max(uncoll_PFE_curve)
print 'Maximum Uncollateralized PFE:%f' % MPFE

MPFE = np.max(coll_PFE_curve)
print 'Maximum Collateralized PFE:%f' % MPFE

Maximum Uncollateralized PFE: 260,962.61

Maximum Collateralized PFE: 252,916.08

The spark-submit script that I have used is by no means optimized. I tuned the various Spark memory and executor-count parameters by trial and error within the limited time frame I had. The configuration I came up with works reasonably well for the cluster I used, but I am sure it can be improved.
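For reference, a rough way to size executors for a cluster like this is sketched below. The node specs, the reserved overheads and the rule of thumb of about 5 cores per executor are illustrative assumptions, not the exact reasoning behind the script I used.

# Rough executor-sizing heuristic (illustrative assumptions, not the exact
# numbers behind the spark-submit script from the previous post)
worker_nodes = 3
cores_per_node = 16            # assumed usable cores per worker node
mem_per_node_gb = 244          # assumed usable memory per worker node

executor_cores = 5                                            # common rule of thumb
executors_per_node = (cores_per_node - 1) // executor_cores   # leave a core for OS/YARN daemons
num_executors = worker_nodes * executors_per_node - 1         # leave a slot for the driver (cluster mode)
executor_mem_gb = int((mem_per_node_gb - 16) / executors_per_node * 0.9)  # ~10% goes to memoryOverhead

print 'num-executors=%d executor-cores=%d executor-memory=%dG' % \
    (num_executors, executor_cores, executor_mem_gb)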

Thanks for stopping by.

 

Estimating Potential Future Exposure with QuantLib and AWS EMR – Part I

Counterparty risk is the risk that a party to an OTC derivatives contract may fail to perform on its contractual obligations, causing losses to the other party. Credit exposure is the actual loss in the event of a counterparty default.

Some of the ways to reduce counterparty risk:

Netting: Offsetting positive and negative contract values with the same counterparty reduces the exposure to that counterparty

Collateral: Holding cash or securities against an exposure

Central counterparties (CCP): Using a third-party clearing house as the counterparty between buyer and seller, with both sides posting margin

Potential Future Exposure (PFE) is a measure of credit risk: the worst exposure one could have to a counterparty at a certain time in the future, with a certain level of confidence. For example, for a PFE of $100,000 with 95% confidence, we expect to have an exposure (the loss in case the counterparty defaults with no recovery) greater than $100,000 in only 5% of scenarios.

A netting set is a group of OTC trades (interest rate swaps, FxFwds or cross-currency swaps, for example) facing the same counterparty. An ISDA CSA agreement with that counterparty defines how exposure is treated within the netting set, but usually we can "net" the exposures of the different instruments in the set, which reduces the overall exposure. For example, a positive exposure on a swap could be netted against a negative exposure on an FxFwd.

Estimating PFE involves simulating future market risk scenarios, calculating the "netted" MtM values of the OTC trades facing the counterparty at various future dates in each scenario, keeping only the positive MtMs (which represent our exposure to the counterparty), and then taking, say, the 95% quantile of the exposures at each date. A toy sketch of the idea follows below.
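As a minimal illustration of these steps (made-up normally distributed MtMs, nothing to do with the actual netting set used later in this post), the whole idea fits in a few lines of NumPy:

import numpy as np

np.random.seed(42)
n_scenarios, n_dates = 1000, 10

# made-up simulated MtM values: rows = scenarios, columns = future dates
swap_mtm = np.random.normal(0.0, 100.0, (n_scenarios, n_dates))
fxfwd_mtm = np.random.normal(0.0, 50.0, (n_scenarios, n_dates))

netted_mtm = swap_mtm + fxfwd_mtm            # netting: positive and negative MtMs offset
exposure = np.maximum(netted_mtm, 0.0)       # only positive MtM is credit exposure

expected_exposure = exposure.mean(axis=0)          # EE profile over time
pfe_curve = np.percentile(exposure, 95, axis=0)    # 95% PFE at each future date
max_pfe = pfe_curve.max()                          # peak PFE over the horizon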

For an interest rate swap, the market risk factor is the underlying forward curve, which determines the NPV of the floating leg. For an FxFwd, it's the forward interest rate curves for the two currencies and the forward FX rate.

In this post, to generate future scenarios of the curves, I use the Hull-White one-factor short rate model, which is assumed to be already calibrated. There are many excellent resources available online which discuss interest rate models, PFE and QuantLib/Python in general; some of the ones I have used here are:

Hull White Term Structure Simulations

Expected Exposure and PFE Simulation

Derivatives CVA Calculation

The focus of this post is to provide a proof of concept of estimating PFE on Amazon Web Services' Elastic MapReduce (EMR). AWS EMR provides a complete platform of Hadoop ecosystem tools to process vast amounts of data in a dynamically scalable (hence "elastic") environment. I will use Apache Spark hosted in an EMR cluster to generate future curve simulations and perform NPV calculations on various dates for each of the scenarios. As you can see, for a realistic number of simulations, the number of calculations needed for a netting set of even a few swaps easily exceeds the capacity of a single machine.

Challenges and Issues:

But the devil’s always in the details, right? Here are some of the challenges I faced with this whole setup involving Amazon EMR, QuantLib and Python and ways to overcome them:

While developing my Spark PFE "job" to be submitted to the EMR cluster, I needed to spin up a cluster on demand. Obviously, I needed to terminate it as soon as I was done, or else I would keep being charged for it. This means that every time I spin up a new cluster, I get fresh EC2 instances with no QuantLib or Anaconda Python installed on them. Now, installing Anaconda Python, compiling Boost, then QuantLib, then the QuantLib SWIG wrappers, and making all of it work seamlessly is not for the faint of heart 🙂 But there is a way to solve this problem. AWS has something called an AMI (Amazon Machine Image), which is really a base machine image you can build your cluster with. So I spun up an EC2 instance, installed/compiled all the software I need, saved it as an AMI and used it as the base machine for the cluster, so that QuantLib/Python is already set up on all nodes from the get-go.

However, AWS EMR requires that if you are going to use a custom AMI for cluster nodes, it must be based on Amazon's own Linux AMI. The Amazon Linux AMI is loosely based on RHEL/CentOS but it's not the same. I faced several issues compiling Boost as well as QuantLib on this version of Linux, all of them related to the gcc/g++ version that came with it, which was really old (4.3). So I upgraded gcc to version 6.4 and then built Boost and QuantLib on top of it.

In a very simplified view, Spark is a distributed architecture with a master node and a number of worker nodes. All the input, output and intermediate data is stored in memory as resilient distributed datasets (RDDs). Objects are serialized and distributed to the worker nodes to be processed. However, I quickly realized that QuantLib/Python SWIG objects cannot be serialized (pickled), so we must re-create QuantLib objects like swaps, dates and curves from pure Python objects on the worker nodes each time we need them, as sketched below.
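A stripped-down sketch of that pattern is shown below; it assumes sc is the already-created SparkContext, and the dictionary keys and the toy two-point curve are purely illustrative:

import QuantLib as ql

# ship only picklable primitives to the executors...
broadcast_dict = {
    'eval_date': (8, 9, 2017),                       # day, month, year as plain ints
    'curve_dates': ['2017-09-08', '2027-09-08'],     # ISO strings, not ql.Date objects
    'disc_factors': [1.0, 0.80],
}
br_dict = sc.broadcast(broadcast_dict)

# ...and rebuild the QuantLib objects inside the function that runs on the workers
def discount_on_executor(_):
    d = br_dict.value
    today = ql.Date(*d['eval_date'])
    ql.Settings.instance().evaluationDate = today
    dates = [ql.DateParser.parseFormatted(x, '%Y-%m-%d') for x in d['curve_dates']]
    curve = ql.DiscountCurve(dates, d['disc_factors'], ql.Actual365Fixed(), ql.UnitedStates())
    return curve.discount(dates[-1])

# e.g. sc.parallelize(range(4)).map(discount_on_executor).collect()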

I also wanted to incorporate collateral movements into my exposure calculations, since all bilateral OTC swaps/FxFwds are required to be collateralized, and I wanted to see how collateral affects PFE. As it turns out, collateral reduces PFE but doesn't completely eliminate it, due to the margin period of risk (7 days in this post) and the asymmetric collateral amounts delivered and posted under Threshold and MTA requirements. I have referenced Jon Gregory's Counterparty Credit Risk, Second Edition, book and spreadsheets for this purpose. A toy collateral-call calculation is sketched below.
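To make the Threshold/MTA mechanics concrete, here is a toy one-sided collateral call at a single margin date; the CSA terms and the exposure are made-up numbers, and the actual bilateral logic lives in the calc_exposure function further down:

import math

# made-up CSA terms for a single, one-sided collateral call
threshold = 50000.0    # counterparty posts collateral only above this exposure
mta = 25000.0          # minimum transfer amount
rounding = 5000.0
collateral_held = 0.0

nettingset_npv = 130000.0    # our (positive) exposure to the counterparty at this date
call_amount = max(nettingset_npv - threshold, 0.0) - collateral_held
if abs(call_amount) < mta:                 # transfers below the MTA are ignored
    call_amount = 0.0
if call_amount and rounding:
    call_amount = math.ceil(call_amount / rounding) * rounding
collateral_held += call_amount

residual_exposure = nettingset_npv - collateral_held   # 50,000 left uncovered by the threshold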

I have also added an FxFwd (EUR/USD in this case) to the netting set, as we deal with a large number of FxFwds on a day-to-day basis. But the FxFwd example in this post is not very realistic, as there is only one FxFwd, it matures in 1 year, and it will not be replaced. Usually, FxFwds in a portfolio are continually rolled over, but this single FxFwd really skews the netting set NPV until it matures. The way I calculated the FxFwd exposure at time t is as follows:

forward points(t) = forward rate - spot rate(t)

YF(t) = year fraction from t until maturity of the FxFwd, r_dom(t) = domestic currency (USD) zero rate at t

FxFwd NPV(t) = ((spot rate(t) + forward points(t)) * FxFwd notional) / (1 + r_dom(t) * YF(t))

FxFwd Exposure(t) = (FxFwd notional * spot rate(t)) - FxFwd NPV(t)

The forward rate is the contractually agreed rate, and the spot rate at time t is obtained from the FX rate simulation using the Garman-Kohlhagen process.

The r_dom(t) rate is obtained from the same yield curve simulation used for the swap MtM calculations. A short worked example of these formulas is shown below.
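Plugging some made-up numbers into the formulas above (purely illustrative, not taken from the actual simulation):

# worked example of the FxFwd formulas above with made-up inputs
notional = 1000000.0   # EUR notional
fwd_rate = 1.20        # contracted EUR/USD forward rate
spot_t = 1.18          # simulated EUR/USD spot at time t
r_dom_t = 0.02         # USD (domestic) zero rate at time t
yf_t = 0.5             # year fraction from t to FxFwd maturity

fwd_points = fwd_rate - spot_t
fxfwd_npv = ((spot_t + fwd_points) * notional) / (1.0 + r_dom_t * yf_t)
fxfwd_exposure = (notional * spot_t) - fxfwd_npv
# fxfwd_npv is about 1,188,118.8 and fxfwd_exposure about -8,118.8 here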

The main stages of our Spark job are as follows:

The Spark driver program loads the netting set instruments, the USD swap curve, the EUR swap curve and the previous swap fixings (the swaps are "seasoned", having been traded a year ago) from an Amazon S3 bucket.

We determine the dates on which we want to estimate the netting set NPV, based on a weekly collateral schedule plus any swap reset dates.

We calculate today’s netting set NPV which will be the base from which the simulated NPVs will start from.

We generate a matrix of normally distributed random numbers using Spark's machine learning library (MLlib), which returns an RDD with the random numbers distributed across the cluster in the form of partitions.

The work to compute the NPVs (collateralized and uncollateralized) will be distributed by Spark to the available nodes (containers with executor threads) and will be done in parallel.

At the driver, we wait for all the work to be done, collect the results of all the NPV simulations (the "NPV cube"), write them out to a file in an Amazon S3 bucket and terminate the cluster.

Another Python script, which runs on a local machine (not in the cluster), then reads from this S3 bucket using the boto3 library and computes/visualizes the final quantiles.

Let’s dive into the code. All the code and input files used in this post are available at the Spark-PFE github repo.

Here are the various input files used in this post. Obviously, the swap curves are not real, but they are close to real. The Libor fixings are real since the data is in the public domain.

usd-libor-swap-curve.csv, eur-libor-swap-curve.csv, instruments.csv, USD3MLibor-Fixings.csv

from pyspark import SparkConf, SparkContext
from pyspark.mllib.random import RandomRDDs
import QuantLib as ql
import datetime as dt
import numpy as np
import math
import sys

# Used in loading the various input text files
def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

# QuantLib date to Python date
def ql_to_datetime(d):
    return dt.datetime(d.year(), d.month(), d.dayOfMonth())

# Python date to QuantLib date
def py_to_qldate(d):
    return ql.Date(d.day, d.month, d.year)

# Loads libor fixings from input file into an RDD and then collects the results
def loadLiborFixings(libor_fixings_file):

    libor_fixings = sc.textFile(libor_fixings_file) \
        .map(lambda line: line.split(",")) \
        .filter(lambda r: is_number(r[1])) \
        .map(lambda line: (str(line[0]), float(line[1]))).cache()

    fixingDates = libor_fixings.map(lambda r: r[0]).collect()
    fixings = libor_fixings.map(lambda r: r[1]).collect()
    return fixingDates, fixings

# Loads input swap specifications from input file into an RDD and then collects the results
def load_swaps(instruments_file):
    swaps = sc.textFile(instruments_file) \
            .map(lambda line: line.split(",")) \
            .filter(lambda r: r[0] == 'IRS') \
            .map(lambda line: (str(line[1]), str(line[2]), float(line[3]), float(line[4]), str(line[5])))\
            .collect()
    return swaps

# Loads input FxFwd specifications from input file into an RDD and then collects the results
def load_fxfwds(instruments_file):
    fxfwds = sc.textFile(instruments_file) \
        .map(lambda line: line.split(",")) \
        .filter(lambda r: r[0] == 'FXFWD') \
        .map(lambda line: (str(line[1]), str(line[2]), float(line[3]), float(line[4]), str(line[5]), str(line[6])))\
        .collect()
    return fxfwds

# Builds a QuantLib swap object from given specification
def makeSwap(today, start, maturity, nominal, fixedRate, index, typ=ql.VanillaSwap.Payer):
    calendar = ql.UnitedStates()
    fixedLegTenor = ql.Period(6, ql.Months)
    floatingLegBDC = ql.ModifiedFollowing
    fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis)
    spread = 0.0
    settle_date = calendar.advance(start, 2, ql.Days)
    end = calendar.advance(settle_date, maturity, floatingLegBDC)

    fixedSchedule = ql.Schedule(settle_date,
                                end,
                                fixedLegTenor,
                                calendar,
                                ql.ModifiedFollowing, ql.ModifiedFollowing,
                                ql.DateGeneration.Forward, False)
    floatSchedule = ql.Schedule(settle_date,
                                end,
                                index.tenor(),
                                index.fixingCalendar(),
                                index.businessDayConvention(),
                                index.businessDayConvention(),
                                ql.DateGeneration.Forward,
                                False)
    swap = ql.VanillaSwap(typ,
                          nominal,
                          fixedSchedule,
                          fixedRate,
                          fixedLegDC,
                          floatSchedule,
                          index,
                          spread,
                          index.dayCounter())
    return swap, [index.fixingDate(x) for x in floatSchedule if index.fixingDate(x) >= today][:-1]

Below is the main method invoked by the spark-submit driver program.

# main method invoked by Spark driver program
def main(sc, args_dict):
    # Broadcast dictionary object, which will hold the various pure Python objects
    # needed by the executors.
    # QuantLib SWIG wrappers cannot be pickled and sent to the executors,
    # so we use Spark broadcast variables to pass pure Python objects.
    broadcast_dict = {}
    pytoday = dt.datetime(2017, 9, 8)
    broadcast_dict['python_today'] = pytoday
    today = ql.Date(pytoday.day, pytoday.month,pytoday.year)
    ql.Settings.instance().evaluationDate = today

    usd_calendar = ql.UnitedStates()
    usd_dc = ql.Actual365Fixed()
    eurusd_fx_spot = ql.SimpleQuote(1.1856)
    broadcast_dict['eurusd_fx_spot'] = eurusd_fx_spot.value()
    output_dir = args_dict['output_dir']
    instruments_file = args_dict['instruments_file']
    libor_fixings_file = args_dict['libor_fixings_file']

    # Loads USD libor swap curve from input file
    usd_swap_curve = sc.textFile(args_dict['usd_swap_curve_file']) \
        .map(lambda line: line.split(",")) \
        .filter(lambda r: is_number(r[1])) \
        .map(lambda line: (str(line[0]), float(line[1]))) \
        .cache()
    usd_curve_dates = usd_swap_curve.map(lambda r: r[0]).collect()
    usd_disc_factors = usd_swap_curve.map(lambda r: r[1]).collect()
    broadcast_dict['usd_curve_dates'] = usd_curve_dates
    broadcast_dict['usd_disc_factors'] = usd_disc_factors
    usd_crv_today = ql.DiscountCurve([ql.DateParser.parseFormatted(x,'%Y-%m-%d')
                                      for x in usd_curve_dates] , usd_disc_factors, usd_dc, usd_calendar )
    usd_disc_term_structure = ql.RelinkableYieldTermStructureHandle(usd_crv_today)

    # Loads EUR libor swap curve from input file
    eur_swap_curve = sc.textFile(args_dict['eur_swap_curve_file']) \
        .map(lambda line: line.split(",")) \
        .filter(lambda r: is_number(r[1])) \
        .map(lambda line: (str(line[0]), float(line[1]))) \
        .cache()
    eur_curve_dates = eur_swap_curve.map(lambda r: r[0]).collect()
    eur_disc_factors = eur_swap_curve.map(lambda r: r[1]).collect()
    broadcast_dict['eur_curve_dates'] = eur_curve_dates
    broadcast_dict['eur_disc_factors'] = eur_disc_factors

    # Build QuantLib index object
    usdlibor3m = ql.USDLibor(ql.Period(3, ql.Months), usd_disc_term_structure)
    # don't need EUR fixings since we don't have a EUR swap
    fixingDates, fixings = loadLiborFixings(libor_fixings_file)
    fixingDates = [ql.DateParser.parseFormatted(r, '%Y-%m-%d') for r in fixingDates]
    three_month_old_date = usd_calendar.advance(today, -90, ql.Days, ql.ModifiedFollowing)
    latestFixingDates = fixingDates[fixingDates.index(three_month_old_date):]
    latestFixings = fixings[fixingDates.index(three_month_old_date):]
    usdlibor3m.addFixings(latestFixingDates, latestFixings)
    broadcast_dict['fixing_dates'] = [ql_to_datetime(x) for x in latestFixingDates]
    broadcast_dict['fixings'] = latestFixings

    swaps = load_swaps(instruments_file)
    broadcast_dict['swaps'] = swaps
    fxfwds = load_fxfwds(instruments_file)
    broadcast_dict['fxfwds'] = fxfwds

    swaps = [
                makeSwap(today, ql.DateParser.parseFormatted(swap[0], '%Y-%m-%d'),
                     ql.Period(swap[1]), swap[2], swap[3], usdlibor3m,
                     ql.VanillaSwap.Payer if swap[4] == 'Payer' else ql.VanillaSwap.Receiver)
                for swap in swaps
            ]

    longest_swap_maturity = max([s[0].maturityDate() for s in swaps])
    broadcast_dict['longest_swap_maturity'] = ql_to_datetime(longest_swap_maturity)

    Nsim = int(args_dict['NSim'])
    NPartitions = int(args_dict['NPartitions'])
    a = float(args_dict['a']) #0.376739
    sigma = float(args_dict['sigma']) #0.0209835
    broadcast_dict['a'] = a
    broadcast_dict['sigma'] = sigma

    # Simulate swap NPVs until we reach the longest maturity
    years_to_sim = math.ceil(ql.Actual360().yearFraction(today,longest_swap_maturity))

    dates = [today + ql.Period(i, ql.Weeks) for i in range(0, 52 * int(years_to_sim))]
    # Very important to add swap reset dates to our universe of dates
    for idx in xrange(len(swaps)):
        dates += swaps[idx][1]
    dates = np.unique(np.sort(dates))
    broadcast_dict['dates'] = [ql_to_datetime(x) for x in dates]

    br_dict = sc.broadcast(broadcast_dict)

    # Write out the time grid to a text file which can be parsed later
    T = [ql.Actual360().yearFraction(today, dates[i]) for i in xrange(1, len(dates))]
    temp_rdd = sc.parallelize(T)
    # coalesce will shrink the number of partitions to 1 so we have only 1 file to write and parse later
    temp_rdd.coalesce(1).saveAsTextFile(output_dir +'/time-grid')

    # the main routine, generate a matrix of normally distributed random numbers and
    # spread them across the cluster
    # we are not setting the number of partitions here as the default parallelism should be enough
    randArrayRDD = RandomRDDs.normalVectorRDD(sc, Nsim, len(T), seed=1L)

    # for each row of the matrix, which corresponds to one simulated path,
    # compute netting set NPV (collateralized and uncollateralized)
    npv_cube = randArrayRDD.map(lambda p: (calc_exposure(p,T,br_dict)))
    # write out the npv cube, remove '[' and ']' added by numpy array to ease parsing later
    npv_cube.coalesce(1).saveAsTextFile(output_dir + '/npv_cube')

Below is the method that will be executed on the executors in parallel on the EMR cluster.

def calc_exposure(rnumbers, time_grid, br_dict):

    # get hold of broadcasted dictionary and rebuild the curves, libor index, swaps and FxFwd
    # we need to do this since QuantLib SWIG objects cannot be passed around

    broadcast_dict = br_dict.value
    today = py_to_qldate(broadcast_dict['python_today'])
    ql.Settings.instance().setEvaluationDate(today)
    usd_calendar = ql.UnitedStates()
    usd_dc = ql.Actual365Fixed()
    eur_calendar = ql.TARGET()
    eur_dc = ql.ActualActual()

    maturity = 10
    a = broadcast_dict['a']
    sigma = broadcast_dict['sigma']
    dates = [py_to_qldate(x) for x in broadcast_dict['dates']]
    longest_swap_maturity = py_to_qldate(broadcast_dict['longest_swap_maturity'])
    fixing_dates = [py_to_qldate(x) for x in broadcast_dict['fixing_dates']]
    fixings = broadcast_dict['fixings']

    usd_curve_dates = broadcast_dict['usd_curve_dates']
    usd_disc_factors = broadcast_dict['usd_disc_factors']
    usd_t0_curve = ql.DiscountCurve([ql.DateParser.parseFormatted(x, '%Y-%m-%d')
                        for x in usd_curve_dates], usd_disc_factors, usd_dc, usd_calendar)
    usd_t0_curve.enableExtrapolation()
    usd_disc_term_structure = ql.RelinkableYieldTermStructureHandle(usd_t0_curve)

    eur_curve_dates = broadcast_dict['eur_curve_dates']
    eur_disc_factors = broadcast_dict['eur_disc_factors']
    eur_t0_curve = ql.DiscountCurve([ql.DateParser.parseFormatted(x, '%Y-%m-%d')
                                     for x in eur_curve_dates], eur_disc_factors, eur_dc, eur_calendar)
    eur_t0_curve.enableExtrapolation()
    eur_disc_term_structure = ql.RelinkableYieldTermStructureHandle(eur_t0_curve)

    eurusd_fx_spot = broadcast_dict['eurusd_fx_spot']
    flat_vol_hyts = ql.BlackVolTermStructureHandle(ql.BlackConstantVol(today, usd_calendar, 0.20, usd_dc))
    usdlibor3m = ql.USDLibor(ql.Period(3, ql.Months), usd_disc_term_structure)
    usdlibor3m.addFixings(fixing_dates, fixings)
    engine = ql.DiscountingSwapEngine(usd_disc_term_structure)
    swaps = broadcast_dict['swaps']
    swaps = [
        makeSwap(today, ql.DateParser.parseFormatted(swap[0], '%Y-%m-%d'),
                 ql.Period(swap[1]), swap[2], swap[3], usdlibor3m,
                 ql.VanillaSwap.Payer if swap[4] == 'Payer' else ql.VanillaSwap.Receiver)
        for swap in swaps
    ]

    # I know there is only one fxfwd in the instruments file
    fxfwds = broadcast_dict['fxfwds']
    fx_startdate = ql.DateParser.parseFormatted(fxfwds[0][0], '%Y-%m-%d')
    fx_tenor = ql.Period(fxfwds[0][1])
    fx_notional = fxfwds[0][2]
    fwd_rate = fxfwds[0][3]
    fx_maturity = usd_calendar.advance(fx_startdate, fx_tenor, ql.Following)

    # define the NPV cube array
    # number of rows = number of dates in time-grid
    # number of columns = 2 (collateralized and uncollateralized NPV)
    npvMat = np.zeros((len(time_grid),2))

    # utility method to calc FxFwd exposure
    def fxfwd_exposure(date, spot, usd_curve):
        usd_zero_rate = usd_curve.zeroRate(usd_dc.yearFraction(date, fx_maturity),
                                           ql.Compounded, ql.Annual).rate()
        yf = usd_dc.yearFraction(date, fx_maturity)
        fwd_points = fwd_rate - spot
        fxfwd_npv = ((spot + fwd_points) * fx_notional) / (1 + (usd_zero_rate * yf))
        fxfwd_exp = (fx_notional * eurusd_fx_spot) - fxfwd_npv
        return fxfwd_exp

    # Initialize NPV cube with today's NPVs
    nettingset_npv = 0.
    for idx in xrange(len(swaps)):
        swaps[idx][0].setPricingEngine(engine)
        nettingset_npv += swaps[idx][0].NPV()
    fxfwd_exp = fxfwd_exposure(today, eurusd_fx_spot, usd_t0_curve)
    nettingset_npv += fxfwd_exp
    npvMat[0] = nettingset_npv

    # Hull White parameter estimations
    def gamma(t):
        forwardRate = usd_t0_curve.forwardRate(t, t, ql.Continuous, ql.NoFrequency).rate()
        temp = sigma * (1.0 - np.exp(- a * t)) / a
        return forwardRate + 0.5 * temp * temp

    def B(t, T):
        return (1.0 - np.exp(- a * (T - t))) / a

    def A(t, T):
        forward = usd_t0_curve.forwardRate(t, t, ql.Continuous, ql.NoFrequency).rate()
        value = B(t, T) * forward - 0.25 * sigma * B(t, T) * sigma * B(t, T) * B(0.0, 2.0 * t)
        return np.exp(value) * usd_t0_curve.discount(T) / usd_t0_curve.discount(t)

    usd_rmat = np.zeros(shape=(len(time_grid)))
    usd_rmat[:] = usd_t0_curve.forwardRate(0,0, ql.Continuous, ql.NoFrequency).rate()

    eur_rmat = np.zeros(shape=(len(time_grid)))
    eur_rmat[:] = eur_t0_curve.forwardRate(0, 0, ql.Continuous, ql.NoFrequency).rate()

    spotmat = np.zeros(shape=(len(time_grid)))
    spotmat[:] = eurusd_fx_spot

    # CSA terms for Collateral movements
    IA1 = 0.0; IA2 = 0.0; threshold1 = 50000.0; threshold2 = 50000.0; MTA1 = 25000.0; MTA2 = 25000.0; rounding = 5000.0
    collateral_held = IA2 - IA1
    collateral_posted = 0.0

    # the main loop of NPV computations
    for iT in xrange(1, len(time_grid)):
        mean = usd_rmat[iT - 1] * np.exp(- a * (time_grid[iT] - time_grid[iT - 1])) + \
               gamma(time_grid[iT]) - gamma(time_grid[iT - 1]) * \
                                        np.exp(- a * (time_grid[iT] - time_grid[iT - 1]))

        var = 0.5 * sigma * sigma / a * (1 - np.exp(-2 * a * (time_grid[iT] - time_grid[iT - 1])))
        rnew = mean + rnumbers[iT-1] * np.sqrt(var)
        usd_rmat[iT] = rnew
        # USD discount factors as generated by HW model
        usd_disc_factors = [1.0] + [A(time_grid[iT], time_grid[iT] + k) *
                                np.exp(- B(time_grid[iT], time_grid[iT] + k) * rnew) for k in xrange(1, maturity + 1)]

        mean = eur_rmat[iT - 1] * np.exp(- a * (time_grid[iT] - time_grid[iT - 1])) + \
               gamma(time_grid[iT]) - gamma(time_grid[iT - 1]) * \
                                      np.exp(- a * (time_grid[iT] - time_grid[iT - 1]))

        var = 0.5 * sigma * sigma / a * (1 - np.exp(-2 * a * (time_grid[iT] - time_grid[iT - 1])))
        rnew = mean + rnumbers[iT - 1] * np.sqrt(var)
        eur_rmat[iT] = rnew
        # EUR discount factors as generated by HW model
        eur_disc_factors = [1.0] + [A(time_grid[iT], time_grid[iT] + k) *
                                    np.exp(- B(time_grid[iT], time_grid[iT] + k) * rnew) for k in xrange(1, maturity + 1)]

        if dates[iT] > longest_swap_maturity:
            break

        # very important to reset the valuation date
        ql.Settings.instance().setEvaluationDate(dates[iT])
        crv_date = dates[iT]
        crv_dates = [crv_date] + [crv_date + ql.Period(k, ql.Years) for k in xrange(1, maturity + 1)]
        # use the new disc factors to build a new simulated curve
        usd_crv = ql.DiscountCurve(crv_dates, usd_disc_factors, ql.Actual365Fixed(), ql.UnitedStates())
        usd_crv.enableExtrapolation()
        eur_crv = ql.DiscountCurve(crv_dates, eur_disc_factors, ql.ActualActual(), ql.TARGET())
        eur_crv.enableExtrapolation()
        usd_disc_term_structure.linkTo(usd_crv)
        eur_disc_term_structure.linkTo(eur_crv)
        usdlibor3m.addFixings(fixing_dates, fixings)
        swap_engine = ql.DiscountingSwapEngine(usd_disc_term_structure)

        # build Garman-Kohlagen process for FX rate simulation for FxFwd
        gk_process = ql.GarmanKohlagenProcess(ql.QuoteHandle(ql.SimpleQuote(eurusd_fx_spot)),
                                              usd_disc_term_structure, eur_disc_term_structure, flat_vol_hyts)
        dt = time_grid[iT] - time_grid[iT - 1]
        spotmat[iT] = gk_process.evolve(time_grid[iT - 1], spotmat[iT - 1], dt, rnumbers[iT - 1])
        nettingset_npv = 0.
        for s in range(len(swaps)):
            if usdlibor3m.isValidFixingDate(dates[iT]):
                fixing = usdlibor3m.fixing(dates[iT])
                usdlibor3m.addFixing(dates[iT], fixing)
            swaps[s][0].setPricingEngine(swap_engine)
            nettingset_npv += swaps[s][0].NPV()

        if dates[iT] <= fx_maturity:
            fxfwd_exp = fxfwd_exposure(dates[iT], spotmat[iT], usd_crv)
            nettingset_npv += fxfwd_exp

        # we have uncollateralized netting set NPV
        npvMat[iT,0] = nettingset_npv
        prev_exposure = npvMat[iT-1,1]
        print 'prev_exposure:%f' % prev_exposure
        collateral_held = collateral_held + collateral_posted
        print 'collateral_held:%f' % collateral_held
        nettingset_npv = nettingset_npv + IA2 - IA1
        print 'nettingset_npv:%f' % nettingset_npv
        collateral_required = max(nettingset_npv - threshold2, 0) \
                              - max(-nettingset_npv - threshold1, 0) - collateral_held
        print 'collateral_required:%f' % collateral_required
        collateral_posted = collateral_required
        if collateral_posted > 0:
            if collateral_posted < MTA2:
                collateral_posted = 0.
        elif -collateral_posted < MTA1:
            collateral_posted = 0.
        print 'collateral_posted:%f' % collateral_posted
        if collateral_posted != 0. and rounding != 0.:
            collateral_posted = math.ceil(collateral_posted/rounding) * rounding
        print 'collateral_posted after rounding:%f' % collateral_posted
        if collateral_posted < 0:
            collateral_held = collateral_held + collateral_posted
            collateral_posted = 0.
        print 'collateral_held:%f' % collateral_held
        # we have collateralized netting set NPV
        npvMat[iT,1] = nettingset_npv - collateral_held
        print 'npvMat:%f' % npvMat[iT,1]
        print '========================================='
    return npvMat

Below is the driver program initialization and argument parsing.

if __name__ == "__main__":

    if len(sys.argv) != 10:
        print('Usage: ' + sys.argv[0] + ' <num_of_simulations> <num_of_partitions> <hull_white_a> <hull_white_sigma> <usd_swap_curve> <eur_swap_curve> <libor_fixings> <instruments> <output_dir>')
        sys.exit(1)
    conf = SparkConf().setAppName("SPARK-PFE")
    sc  = SparkContext(conf=conf)
    sc.setLogLevel('INFO')

    args = {'NSim':sys.argv[1], 'NPartitions': sys.argv[2], 'a': sys.argv[3], 'sigma':sys.argv[4],
            'usd_swap_curve_file': sys.argv[5], 'eur_swap_curve_file': sys.argv[6], 'libor_fixings_file': sys.argv[7],
            'instruments_file': sys.argv[8], 'output_dir': sys.argv[9]}

    main(sc, args)

The spark-submit script used to submit this job looks as follows:

PYSPARK_DRIVER_PYTHON=/opt/anaconda/anaconda2/bin/python \
PYSPARK_PYTHON=/opt/anaconda/anaconda2/bin/python \
spark-submit \
--deploy-mode cluster \
--master yarn \
--conf spark.driver.extraLibraryPath="${LD_LIBRARY_PATH}" \
--conf spark.executorEnv.LD_LIBRARY_PATH="${LD_LIBRARY_PATH}"  \
--num-executors 14 \
--conf spark.yarn.executor.memoryOverhead=5120 \
--executor-memory 43G \
--conf spark.yarn.driver.memoryOverhead=5120 \
--driver-memory 43G \
--executor-cores 12 \
--driver-cores 3 \
--conf spark.default.parallelism=168 \
spark-pfe.py 5000 48 0.376739 0.0209835 \
s3://spark-pfe/usd-libor-swap-curve.csv \
s3://spark-pfe/eur-libor-swap-curve.csv \
s3://spark-pfe/USD3MLibor-Fixings.csv \
s3://spark-pfe/instruments.csv \
s3://spark-pfe/output

After the Spark job completes, the output files, which are the time-grid array and the NPV cube, are stored in an S3 bucket. We will use another Python script to pull the files and visualize the PFE, which will be the subject of Part II of this post.

 

Swap Curve construction using Global Optimization

In my previous post, I discussed option Implied Volatility and the Binomial Model. In this post, we switch gears and discuss Swap Curve construction. Building a swap curve is so fundamental to interest rate derivatives pricing that it is probably one of the most closely guarded pieces of proprietary information on a trading desk.

Pricing an interest rate swap involves bootstrapping a blended curve of different instruments based on their maturities and market liquidity. Usually cash deposits and Eurodollar futures are used at the short end, and market swap rates are used at the long end. At the long end, however, only a subset of market swap rates is available, and bootstrapping requires the missing rates to be interpolated from the known ones. This makes the interpolation methodology a critical part of curve building. Also, since forward rates are essentially a gradient of the discount curve, any misalignment in the latter is magnified in the former.

There is an alternative to bootstrapping called the global optimization approach, where the short end of the curve is bootstrapped as usual, but at the long end we “guess” the forward rates, compute the par rates of the market swaps and minimize the error between the actual par rates and the computed par rates. We also add a smoothness constraint to the minimization so that the overall gradient of the curve is minimized. This approach is illustrated in the excellent book Swaps and Other Derivatives, 2nd Ed., by Richard Flavell.

I will use QuantLib to generate swap schedules and to deal with business day conventions. QuantLib can of course build a full swap curve on its own, but I will use SciPy's optimize package for the curve building. My objective was to match Spreadsheet 3.9, “Building a blended curve”, from the above book. Unfortunately, the spreadsheet does not actually show the equations used by Excel's Solver for the optimization, only the final result, which leaves considerable ambiguity; I hope to clear that up here.

Note on QuantLib and Python

There are numerous resources online on how to build QuantLib from source and then build the Python extensions; I would like to point you to the precompiled QuantLib-Python package maintained by Christoph Gohlke. If you are on Windows, you can just install the whl package and get started.

First some common formulae we will be using:

$$\text{Discount Factor}: DF_t = \frac{1}{1 + r_t \, d_t}$$ where $d_t$ is the year fraction and $r_t$ is the annual rate

$$\text{Forward Rate}: F_{t/T} = \frac{(DF_t/DF_T) - 1}{T-t}$$ where $DF_t$ is the discount factor to $t$ and $DF_T$ is the discount factor to $T$ (both from today)

$$\text{Par Rate of Swap}: \frac{1-D_T}{\sum_{n=1}^{N}\Delta_n \, D_n}$$ where $D_T$ is the maturity date discount factor, $\Delta_n$ are the year fractions between two consecutive reset dates and $D_n$ are the corresponding reset date discount factors.
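Before diving into the main code, here is a tiny worked example of these three formulas with made-up quarterly forward rates (illustrative numbers only):

# tiny worked example of the formulas above with made-up rates
yf = [0.25, 0.25, 0.25, 0.25]          # quarterly accrual periods of the floating leg
fwds = [0.030, 0.031, 0.032, 0.033]    # "guessed" quarterly forward rates

# DF_t = 1 / (1 + r_t * d_t), chained period by period from today
dfs = []
df = 1.0
for d_t, r_t in zip(yf, fwds):
    df /= (1.0 + r_t * d_t)
    dfs.append(df)

# forward rate between the first and second dates, recovered from the discount factors
f_1_2 = (dfs[0] / dfs[1] - 1.0) / yf[1]        # equals fwds[1]

# par rate of a 1-year swap with a single annual fixed payment
par_rate = (1.0 - dfs[-1]) / (sum(yf) * dfs[-1])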

import QuantLib as ql
import numpy as np
from scipy.optimize import least_squares
from scipy.interpolate import interp1d
import math
import timeit
import matplotlib.pyplot as plt

# we are trying to match Flavell's spreadsheet 3.9
today = ql.Date(4, ql.February,2008)
calendar = ql.JointCalendar(ql.UnitedStates(), ql.UnitedKingdom())
ql.Settings.instance().evaluationDate = today
settle_date = calendar.advance(today, 2, ql.Days)
#define a Swap class to keep state of various dates and year fractions

class MySwap(object):
    def __init__(self, floating_dates, fixed_dates, fixed_rate):
        super(MySwap, self).__init__()
        self.fixed_rate = fixed_rate
        self.floating_dates = floating_dates
        self.fixed_dates = fixed_dates
        self.floating_leg_from_dates = floating_dates[:-1]
        self.floating_leg_to_dates = floating_dates[1:]
        self.fixed_leg_from_dates = fixed_dates[:-1]
        self.fixed_leg_to_dates = fixed_dates[1:]
        self.yf_floating_leg = [ql.Thirty360(ql.Thirty360.BondBasis).yearFraction(d1,d2) for d1,d2
                                in zip(self.floating_leg_from_dates,self.floating_leg_to_dates)]
        self.yf_fixed_leg = [ql.Thirty360(ql.Thirty360.BondBasis).yearFraction(d1, d2) for d1, d2
                                in zip(self.fixed_leg_from_dates, self.fixed_leg_to_dates)]
    
    # for sorting 2 swaps by maturity date   
    def __gt__(self, swap2):
        return self.floating_leg_to_dates[-1] > swap2.floating_leg_to_dates[-1]
#class for Eurodollar futures

class MyFuture(object):
    def __init__(self, begin_date, mkt_yield):
        super(MyFuture, self).__init__()
        self.begin_date = begin_date
        self.mkt_yield=  mkt_yield
#class for Cash deposits

class MyDeposit(object):
    def __init__(self, begin_date, end_date, mkt_yield):
        super(MyDeposit, self).__init__()
        self.begin_date = begin_date
        self.end_date = end_date
        self.mkt_yield=  mkt_yield
        self.yf = ql.Thirty360(ql.Thirty360.BondBasis).yearFraction(begin_date, end_date)
        self.disc_factor = 1/(1+(self.yf*mkt_yield))
# A utility function to recursively calculate a series of discount factors from year fractions
# and forward rates

def get_disc_factors(yf, rates):
    df = [0] * len(yf)
    df[0] = 1.0 / (1.0 + (yf[0] * rates[0]))
    for i in range(1, len(yf)):
        df[i] = 1.0 / (1.0 + (yf[i] * rates[i])) * df[i - 1]
    return df
# this is the objective function which will be minimized using least squares method
# we calculate the par swap rate from the "guessed" forward rates 
# and minimize the difference between this par swap rate and the market quoted par swap rate
# the first 4 forward rates have been derived from deposit and Eurodollar futures and
# are known

#@profile
def scipy_func_to_minimize(fwdGuesses):

    x = np.concatenate([known_fwds,fwdGuesses])
    df_float_leg_longest_swap = get_disc_factors(swaps[-1].yf_floating_leg, x)
    
    def par_rate(yf,df):
        up = 1 - df[-1]
        down = np.sum([y*d for y,d in zip(yf,df)])
        return up/down

    def swap_rate(swap):

        df = np.interp([d.serialNumber() for d in swap.fixed_leg_to_dates],
                       [dl.serialNumber() for dl in swaps[-1].floating_leg_to_dates],
                       df_float_leg_longest_swap)
        return par_rate(swap.yf_fixed_leg, df)

    fi = [swap_rate(s) - s.fixed_rate for s in swaps]
    sq = smoothness_func(x)
    fi.append(sq)
    return fi
# This function ensures a smooth forward curve by minimizing the gradient of the 
# forward rates
# This function is added as an additional constraint to the above 
# objective function

def smoothness_func(fwdGuesses):
    sq = 0.0
    for i, f in enumerate(fwdGuesses[:-1]):
        sq += math.pow((fwdGuesses[i] - fwdGuesses[i + 1]), 2)
    return sq
# A utility function to define a vanilla IRS, specifically USD-3M-Libor 
# fixed-float swap

def makeVanilla1Y3MSwap(maturity, fixedRate, index):

    end = calendar.advance(settle_date, maturity)
    fixedLegTenor = ql.Period("1y")
    fixedLegBDC = ql.ModifiedFollowing
    fixedLegDC = ql.Thirty360(ql.Thirty360.BondBasis)
    spread = 0.0
    fixedSchedule = ql.Schedule(settle_date,
                                end,
                                fixedLegTenor,
                                ql.TARGET(),
                                fixedLegBDC,
                                fixedLegBDC,
                                ql.DateGeneration.Forward,
                                False)
    floatSchedule = ql.Schedule(settle_date,
                                end,
                                index.tenor(),
                                index.fixingCalendar(),
                                index.businessDayConvention(),
                                index.businessDayConvention(),
                                ql.DateGeneration.Forward,
                                False)
    swap = ql.VanillaSwap(ql.VanillaSwap.Payer,
                          1.0,
                          fixedSchedule,
                          fixedRate,
                          fixedLegDC,
                          floatSchedule,
                          index,
                          spread,
                          index.dayCounter())
    return swap
# Market data as of 2/4/2008

libor3m = ql.USDLibor(ql.Period(3, ql.Months))
deposits = [(calendar.advance(settle_date, ql.Period(1, ql.Weeks)),0.032175),
            (calendar.advance(settle_date, ql.Period(1, ql.Months)),0.031813),
            (calendar.advance(settle_date, ql.Period(3, ql.Months)),0.03145)]

depos = [MyDeposit(settle_date, t[0], t[1]) for t in deposits]

# ED Futures start dates : Mar-08,Jun-08,Sep-08,Dec-08,Mar-09
fQuotes =   [97.000,97.410,97.520,97.495,97.395]
futures = []

date_holder = settle_date
for fz in fQuotes:
    fut_begin_date = ql.IMM_nextDate(date_holder)
    futures.append(MyFuture(fut_begin_date, (100.-fz)/100.))
    date_holder = fut_begin_date

ql_swaps = [makeVanilla1Y3MSwap(ql.Period(2, ql.Years),2.795e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(3, ql.Years),3.035e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(4, ql.Years),3.275e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(5, ql.Years),3.5050e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(6, ql.Years),3.715e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(7, ql.Years),3.885e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(8, ql.Years),4.025e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(9, ql.Years),4.155e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(10, ql.Years),4.265e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(12, ql.Years),4.435e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(15, ql.Years),4.615e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(20, ql.Years),4.755e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(25, ql.Years),4.805e-2,libor3m),
            makeVanilla1Y3MSwap(ql.Period(30, ql.Years),4.815e-2,libor3m)]

swaps = [MySwap([d for d in x.floatingSchedule()], [d for d in x.fixedSchedule()], x.fixedRate()) for x in ql_swaps]
swaps.sort()
#Starting to build fwd curve, add deposits first

fwd_curve = [(dep[0], dep[1]) for dep in deposits]

# get next 3 forward rates from futures rates by interpolating on the 
# provided rates 

fut_curve_dates = [calendar.advance(settle_date, ql.Period(3*i, ql.Months)) for i in range(1, 5)]
fut_interpolation = interp1d([f.begin_date.serialNumber() for f in futures], [f.mkt_yield for f in futures])
fwd_curve.extend([(fut_curve_dates[idx], fut_interpolation(fut_curve_dates[idx-1].serialNumber())[()]) 
                  for idx, f in enumerate(fut_curve_dates) if idx > 0])
# At this point we know first 4 forward rates 
# and we guess the next 116 forwards to be same as the 4th forward rate
known_fwds = []
for d in fut_curve_dates:
    known_fwds.extend([f[1] for f in fwd_curve if f[0] == d])
fwd_guesses = [known_fwds[-1]] * (len(swaps[-1].floating_leg_to_dates)-len(known_fwds))
# Now we proceed with optimization and get the optimized forwards 
# that provide the arbitrage-free and smooth forward curve

start_time = timeit.default_timer()
x = least_squares(scipy_func_to_minimize, fwd_guesses, verbose=1).x
print('### Scipy optimization took:%s seconds' % str(timeit.default_timer() - start_time))
optimized_fwds = np.concatenate([known_fwds, x])

`gtol` termination condition is satisfied.
Function evaluations: 20, initial cost: 1.7812e-03, final cost 1.5585e-09, first-order optimality 1.04e-08.
### Scipy optimization took:2.5262434 seconds

#Start building a discount curve as a list of tuples (date,disc_factor), 
# first discount factor = 1.0

dates_df = [(settle_date, 1.0)]
for idx, fwd in enumerate(optimized_fwds):
    temp_tuple = (swaps[-1].floating_leg_to_dates[idx], 
                  dates_df[idx][1]/(1 + (optimized_fwds[idx] * swaps[-1].yf_floating_leg[idx])))
    dates_df.append(temp_tuple)
#Start building a zero curve

zc_rates = []
cumulative_floating_yf = np.cumsum(swaps[-1].yf_floating_leg)
cumulative_floating_yf = np.insert(cumulative_floating_yf, 0, 0.)
entire_date_range = swaps[-1].floating_leg_from_dates + [swaps[-1].floating_leg_to_dates[-1]]
zc_rates = [ (dt, -np.log(dc[1])/yf) for idx, (dc, yf, dt) in enumerate(zip(dates_df,cumulative_floating_yf, entire_date_range)) if idx > 0]
zc_temp_tup = (swaps[-1].floating_leg_from_dates[0], zc_rates[0][1])
zc_rates.insert(0, zc_temp_tup)
#We already have forward rates, just put them in a list of tuples

fwd_rates = [(d,fwd) for d, fwd in zip(swaps[-1].floating_leg_from_dates, optimized_fwds)]
# Plot all 3 curves

_ = plt.figure(figsize=(10, 8.3))

_ = plt.subplot(221)
x_axis = cumulative_floating_yf[1:]
y_axis = list(f[1] for f in fwd_rates)
_ = plt.plot(x_axis, y_axis)
plt.grid()
_ = plt.xlabel('Time')
_ = plt.ylabel('Forward Rates')
_ = plt.title('Forward Curve')

_ = plt.subplot(222)
x_axis = cumulative_floating_yf
y_axis = list(d[1] for d in dates_df)
_ = plt.plot(x_axis, y_axis)
plt.grid()
_ = plt.xlabel('Time')
_ = plt.ylabel('Discount Factors')
_ = plt.title('Discount Curve')

_ = plt.subplot(223)
x_axis = cumulative_floating_yf
y_axis = list(z[1] for z in zc_rates)
_ = plt.plot(x_axis, y_axis)
plt.grid()
_ = plt.xlabel('Time')
_ = plt.ylabel('Zero Rates')
_ = plt.title('Zero Curve')
plt.show()

Option Volatility and Binomial Model

In my previous post Options and Volatility Smile, we used the Black-Scholes formula to derive Implied Volatility from a given option strike and tenor. Most of the options traded on exchanges are American (with a few index options being European) and can be exercised at any time prior to expiration. Whether it is optimal to exercise an option early depends on factors such as whether the stock pays a dividend and the level of interest rates, and it is a very complex subject. What I want to focus on is using the Binomial model to price an American option. Below I summarize the Binomial model theory from the excellent book Derivative Markets, 3rd Ed., by Robert McDonald.

There are many flavors of the Binomial model, but they all have the following steps in common:

  • Simulate future prices of underlying stock at various points in time until expiry
  • Calculate the payoff of the option at expiry
  • Discount the payoff back to today to get the option price

1. Simulate future prices of the underlying

Assume the price of a stock is $S_0$ and that over a small time period $\Delta t$ the price can either rise to $S_0u$ or fall to $S_0d$, where $u$ is the factor by which the price rises and $d$ is the factor by which it falls. The stock is assumed to follow a random walk; also assume that $p$ is the probability of the stock price rising and $(1-p)$ is the probability of it falling. There are many ways to choose the values of $u$, $d$ and $p$, and the various Binomial models differ in how these three parameters are calculated.

In the Cox-Ross-Rubinstein (CRR) model, $u = \frac{1}{d}$ is assumed. Since we have 3 unknowns, 2 more equations are needed, and those come from the risk-neutral pricing assumption. Over a small $\Delta t$ the expected return of the stock is

$$pu + (1-p)d = e^{r \Delta t}$$

and the expected variance of the returns is

$$pu^2 + (1-p)d^2 - (e^{r \Delta t})^2 = \sigma^2 \Delta t$$

Solving for $u$, $d$ and $p$, we get

$$p = \frac{e^{r\Delta t} - d}{u-d}$$

$$u = e^{\sigma \sqrt{\Delta t}}$$

$$d = e^{-\sigma\sqrt{\Delta t}}$$
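In code, the CRR parameters are just a few lines (illustrative inputs):

import math

# CRR parameters for a small example tree (illustrative inputs)
S0, r, sigma, T, N = 100.0, 0.05, 0.20, 1.0, 3
dt = T / N
u = math.exp(sigma * math.sqrt(dt))
d = math.exp(-sigma * math.sqrt(dt))   # = 1/u
p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral probability of an up move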

The CRR model generates a recombining tree centered around $S_0$ as we simulate multi-step stock price movements.


2. Calculating payoffs at expiry

In this step, we calculate the payoff at each node that corresponds to expiry.

For a put option, $payoff = \max(K - S_N, 0)$

For a call option, $payoff = \max(S_N - K, 0)$

where $S_N$ is the stock price at an expiry node and $K$ is the strike.

3. Discounting the payoffs

In this step, we discount the payoffs at expiry back to today using backward induction, where we start at the expiry nodes and step backwards through time, calculating the option value at each node of the tree.

For an American put, $V_n = \max(K - S_n, e^{-r \Delta t} (p V_u + (1-p) V_d))$

For an American call, $V_n = \max(S_n - K, e^{-r \Delta t} (p V_u + (1-p) V_d))$

$V_n$ is the option value at node n

$S_n$ is the stock price at node n

$r$ is risk free interest rate

$\Delta t$ is time step

$V_u$ is the option value from the upper node at n+1

$V_d$ is the option value from the lower node at n+1
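Putting the three steps together, a compact CRR American put pricer looks something like the sketch below; it is here for intuition only, since the Leisen-Reimer variant used in the next section converges much faster:

import math
import numpy as np

def crr_american_put(S0, K, r, T, N, sigma):
    dt = T / float(N)
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    df = math.exp(-r * dt)

    # steps 1 and 2: stock prices and payoffs at expiry (highest price first)
    ST = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    values = np.maximum(K - ST, 0.0)

    # step 3: backward induction with an early-exercise check at every node
    for i in range(N - 1, -1, -1):
        ST = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        continuation = df * (p * values[:-1] + (1 - p) * values[1:])
        values = np.maximum(K - ST, continuation)
    return values[0]

# e.g. crr_american_put(100.0, 100.0, 0.05, 1.0, 500, 0.2)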

All variants of the Binomial model, including CRR, converge to Black-Scholes in the limit $\Delta t \to 0$, but the rate of convergence differs. The variant I would like to use is called the Leisen-Reimer model, which converges much faster. Please see the original paper for the formulas and the C++ implementation at Volopta.com, which I have ported to Python in the next section.

The Python code is going to look very similar to the Options and Volatility Smile post, except that we swap out the Black-Scholes framework for the Leisen-Reimer model. We will also use the same option chain data, AAPL_BBG_vols.csv.

A note on Python code

I usually do not write code like the below; I am purposely avoiding classes so that the focus remains on the objective, which is understanding the technique.

import math
import numpy as np
import matplotlib as mpl
import pandas as pd
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.interpolate import interp2d
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm

def BinomialLeisenReimer(S0, K, r, T, N, sigma, div):

    is_call = False  # we imply volatilities from put prices below

    # Leisen-Reimer needs an odd number of steps
    odd_N = N if (N % 2 == 1) else (N + 1)

    d1 = (math.log(S0 / K) + ((r - div) + (sigma ** 2) / 2.) * T) / (sigma * math.sqrt(T))

    d2 = (math.log(S0 / K) + ((r - div) - (sigma ** 2) / 2.) * T) / (sigma * math.sqrt(T))

    dt = T / odd_N
    r_n = math.exp((r - div) * dt)
    df = math.exp(-r * dt)  # one-period discount factor used in the backward induction
    method = 2.
    # Peizer-Pratt inversion applied to d1 and d2
    Term1 = math.pow((d1 / (odd_N + 1.0 / 3.0 - (1 - method) * 0.1 / (odd_N + 1))), 2) * (odd_N + 1.0 / 6.0)
    pp = 0.5 + math.copysign(1, d1) * 0.5 * math.sqrt(1 - math.exp(-Term1))

    Term2 = math.pow((d2 / (odd_N + 1.0 / 3.0 - (1 - method) * 0.1 / (odd_N + 1))), 2) * (odd_N + 1.0 / 6.0)
    p = 0.5 + math.copysign(1, d2) * 0.5 * math.sqrt(1 - math.exp(-Term2))

    u = r_n * pp / p
    d = (r_n - p * u) / (1 - p)

    qu = p
    qd = 1 - p

    #Initialize price tree

    STs = [np.array([S0])]
    # Simulate the possible stock prices path
    for i in range(odd_N):
        prev_branches = STs[-1]
        st = np.concatenate((prev_branches * u, [prev_branches[-1] * d]))
        STs.append(st)  # Add nodes at each time step

    #Initialize payoff tree
    payoffs = np.maximum(0, (STs[odd_N]-K) if is_call else (K-STs[odd_N]))

    def check_early_exercise(payoffs, node):
        early_ex_payoff = (STs[node] - K) if is_call else (K - STs[node])
        return np.maximum(payoffs, early_ex_payoff)

    #Begin tree traversal
    for i in reversed(range(odd_N)):
        # The payoffs from NOT exercising the option
        payoffs = (payoffs[:-1] * qu + payoffs[1:] * qd) * df
        # Payoffs from exercising, for American options
        early_ex_payoff = (STs[i] - K) if is_call else (K - STs[i])
        payoffs = np.maximum(payoffs, early_ex_payoff)

    return payoffs[0]

headers = ['Date','Strike','Call_Bid','Call_Ask','Call','Maturity','Put_Bid','Put_Ask','Put']
dtypes = {'Date': 'str', 'Strike': 'float', 'Call_Bid': 'float', 'Call_Ask': 'float',
          'Call':'float','Maturity':'str','Put_Bid':'float','Put_Ask':'float','Put':'float'}
parse_dates = ['Date', 'Maturity']
data = pd.read_csv('AAPL_BBG_vols.csv',skiprows=1,header=None, names=headers, dtype=dtypes, parse_dates=parse_dates)
data['LR_Imp_Vol'] = 0.0
data['TTM'] = 0.0
r = 0.0248 #risk-free rate
S0 = 153.97 # spot price as of 5/11/2017
div = 0.0182
for i in data.index:
    t = data['Date'][i]
    T = data['Maturity'][i]
    K = data['Strike'][i]
    Put = data['Put'][i]
    sigma_guess = 0.2
    time_to_maturity = (T - t).days/365.0
    data.loc[i, 'TTM'] = time_to_maturity
    def func_LR(sigma):
        return BinomialLeisenReimer(S0, K, r, time_to_maturity, ((T - t).days) - 1, sigma, div) - Put

    lr_imp_vol = brentq(func_LR, 0.03, 1.0)
    data.loc[i, 'LR_Imp_Vol'] = lr_imp_vol

ttm = data['TTM'].tolist()
strikes = data['Strike'].tolist()
imp_vol = data['LR_Imp_Vol'].tolist()
f = interp2d(strikes,ttm,imp_vol, kind='cubic')

plot_strikes = np.linspace(data['Strike'].min(), data['Strike'].max(),25)
plot_ttm = np.linspace(0, data['TTM'].max(), 25)
fig = plt.figure()
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(plot_strikes, plot_ttm)
Z = np.array([f(x,y) for xr, yr in zip(X, Y) for x, y in zip(xr,yr) ]).reshape(len(X), len(X[0]))
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,linewidth=0.1)
ax.set_xlabel('Strikes')
ax.set_ylabel('Time-to-Maturity')
ax.set_zlabel('Implied Volatility')
fig.colorbar(surf, shrink=0.5, aspect=5)

Options and Volatility Smile

An equity option represents the right to buy (“call” option) or sell (“put” option) a unit of underlying stock at a pre-specified price (strike) at a predetermined maturity date (European option) or at any time up to the predetermined date (American option).

The option writer sells an option and the option holder buys it.

For a European call option on an index with a strike of 8,000 and an index level of 8,200 at maturity, the option holder receives the difference of 200 from the option writer. This is called the intrinsic value or payoff of the option from the holder's point of view.

The payoff function for a call option is $$ h_{T}(S,K) = max[S_{T}-K, 0] \tag{Eq. 1}$$

where T = maturity date, $\ S_T $ is the index level at maturity and K is the strike price.

In-the-money: a call (put) is in-the-money when S > K (S < K)
At-the-money: call or put is at-the-money when $\ S \approx K $
Out-of-the-money: a call (put) is out-of-the-money when S < K (S > K)

The fair present value (which is different from the payoff) of a European call option is given by the Black-Scholes formula:

$\ C_{0}^{*} = C^{BSM}(S_{0},K,T,r,\sigma) \tag{Eq. 2}$

$S_{0}$ current index level (spot)
K strike price of the option
T time-to-maturity of the option
r risk-free short rate
$\sigma$ volatility or the std dev of the index returns

$C^{BSM} = S_{t} \, N(d_{1}) - e^{-r(T-t)} \, K \, N(d_{2})\tag{Eq. 3}$

where

$\displaystyle N(d) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{d} e^{-\frac{1}{2}x^{2}} dx$

$\displaystyle d1 = \frac{\log\frac{S_{t}}{K} + (r + \frac{\sigma^2}{2})(T-t)}{\sigma\sqrt{T-t}}$

$\displaystyle d2 = \frac{\log\frac{S_{t}}{K} + (r - \frac{\sigma^2}{2})(T-t)}{\sigma\sqrt{T-t}}$

 

# BSM option valuation
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.interpolate import interp2d
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
## Helper functions ##
def dN(x):
    ''' Probability density function of standard normal random variable x.'''
    return math.exp(-0.5 * x ** 2) / math.sqrt(2 * math.pi)

def N(d):
    ''' Cumulative distribution function of standard normal random variable x. '''
    return quad(lambda x: dN(x), -20, d, limit=50)[0]

def d1f(St, K, t, T, r, sigma):
    ''' Black-Scholes-Merton d1 function.
        Parameters see e.g. BSM_call_value function. '''
    d1 = (math.log(St / K) + (r + 0.5 * sigma ** 2)
        * (T - t)) / (sigma * math.sqrt(T - t))
    return d1

def BSM_call_value(St, K, t, T, r, sigma):
    ''' Calculates the Black-Scholes-Merton European call option value.

    Parameters
    ==========
    St: float
        stock/index level at time t
    K: float
        strike price
    t: float
        valuation date
    T: float
        date of maturity/time-to-maturity if t = 0; T > t
    r: float
        constant, risk-less short rate
    sigma: float
        volatility
    Returns
    =======
    call_value: float
        European call present value at t
    '''
    d1 = d1f(St, K, t, T, r, sigma)
    d2 = d1 - sigma * math.sqrt(T - t)
    call_value = St * N(d1) - math.exp(-r * (T - t)) * K * N(d2)
    return call_value

def BSM_put_value(St, K, t, T, r, sigma):
    ''' Calculates the Black-Scholes-Merton European put option value.

    Parameters
    ==========
    St: float
        stock/index level at time t
    K: float
        strike price
    t: float
        valuation date
    T: float
        date of maturity/time-to-maturity if t = 0; T > t
    r: float
        constant, risk-less short rate
    sigma: float
        volatility
    Returns
    =======
    put_value: float
        European put present value at t
    '''
    # put value derived from the call value via put-call parity: P = C - S + K*exp(-r*(T-t))
    put_value = BSM_call_value(St, K, t, T, r, sigma) - St + math.exp(-r * (T - t)) * K
    return put_value
# Test Option Payoff and Time value
K = 8000 # Strike price
T = 1.0 # time-to-maturity
r = 0.025 # constant, risk-free rate
vol = 0.2 # constant volatility

# Generate spot prices 
S = np.linspace(4000, 12000, 150)
h = np.maximum(S - K, 0) # payoff of the option
C = [BSM_call_value(Szero, K, 0, T, r, vol) for Szero in S] #BS call option values

plt.figure()
plt.plot(S, h, 'b-.', lw=2.5, label='payoff') # plot payoff (intrinsic value) at maturity
plt.plot(S, C, 'r', lw=2.5, label='present value') # plot option present value
plt.grid(True)
plt.legend(loc=0)
plt.xlabel('index level $S_0$')
plt.ylabel('present value $C(t=0)$')

The present value of the option is always higher than the undiscounted payoff; the difference is the time value. In other words, the option’s present value is composed of its intrinsic value plus the time value. The time value reflects the chance that the option moves in-the-money, or further in-the-money, before maturity.
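For example, reusing the BSM_call_value helper and the K, T, r, vol values defined above (the spot level of 8,200 is just an illustrative choice, not from the original listing):

# time value = present value minus the (undiscounted) payoff at a given spot level
S_spot = 8200.0
present_value = BSM_call_value(S_spot, K, 0, T, r, vol)
intrinsic = max(S_spot - K, 0.0)
time_value = present_value - intrinsic
print(present_value, intrinsic, time_value)  # the time value is positive here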

Simulating Returns

The Geometric Brownian motion model of the BS equation is given by

$$\displaystyle dS_{t} = rS_{t}\,dt + \sigma S_{t}\,dZ_{t}\tag{Eq.4}$$

The discretized version is

$$\displaystyle S_{t} = S_{t - \Delta t}\, e^{(r - \frac{1}{2}\sigma^2)\Delta t + \sigma\sqrt{\Delta t}\, z_{t}}\tag{Eq.5}$$

where $t \in \{\Delta t, 2\Delta t, \ldots, T\}$ and $z_{t}$ is a standard normal random variable.

Using the above discretized version, we will simulate the spot prices with $S_{0}$=100, T=10, r = 0.05 and $\sigma$=0.2

import pandas as pd

# model parameters
S0 = 100.0 # initial index level
T = 10.0 # time horizon
r = 0.05 # risk-less short rate
vol = 0.2 # instantaneous volatility
# simulation parameters
np.random.seed(250000)
#generate a pd array with business dates, ignores holidays
gbm_dates = pd.date_range(start='2007-10-05', end='2017-10-05', freq='B')
M = len(gbm_dates) # time steps
I = 1 # index level paths
dt = 1 / 252. # 252 business days a year
df = math.exp(-r * dt) # discount factor

# stock price paths
rand = np.random.standard_normal((M, I)) # random numbers
S = np.zeros_like(rand) # stock matrix
S[0] = S0 # initial values
for t in range(1, M): # stock price paths using Eq.5
    S[t] = S[t - 1] * np.exp((r - vol ** 2 / 2) * dt + vol * rand[t] * math.sqrt(dt))
#create a pd dataframe with date as index and a column named "spot"
gbm = pd.DataFrame(S[:, 0], index=gbm_dates, columns=['spot']) 
gbm['returns'] = np.log(gbm['spot'] / gbm['spot'].shift(1)) #log returns
# Realized Volatility 
gbm['realized_var'] = 252 * np.cumsum(gbm['returns'] ** 2) / np.arange(len(gbm))
gbm['realized_vol'] = np.sqrt(gbm['realized_var'])
print(gbm.head())
gbm = gbm.dropna()
                 spot   returns  realized_var  realized_vol
2007-10-08  98.904295 -0.011018      0.030589      0.174898
2007-10-09  98.444867 -0.004656      0.018026      0.134261
2007-10-10  97.696364 -0.007632      0.016911      0.130041
2007-10-11  98.280594  0.005962      0.014922      0.122158
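For reference, the realized variance computed above is the annualized running mean of the squared daily log returns,

$$\displaystyle \hat{\sigma}^{2}_{t} = \frac{252}{n_{t}} \sum_{i=1}^{n_{t}} r_{i}^{2}$$

where $r_{i}$ are the log returns observed up to time t and $n_{t}$ is their count; the realized volatility is its square root.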
plt.figure(figsize=(9, 6))
plt.subplot(211)
gbm['spot'].plot()
plt.ylabel('daily quotes')
plt.grid(True)
plt.axis('tight')
plt.subplot(212)
gbm['returns'].plot()
plt.ylabel('daily log returns')
plt.grid(True)
plt.axis('tight')


Implied volatility is the value of $\sigma$ that solves Eq. 2 for a given option market quote $C_{0}^{*}$.
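Numerically, backing out the implied volatility is a one-dimensional root-finding problem. Here is a minimal sketch for a single call quote, using brentq and the BSM_call_value helper from above; the quote of 850 and the spot/strike/maturity/rate values are made up purely for illustration, and the full loop over an option chain follows below.

# implied volatility for one (hypothetical) market quote via Brent's method
C_market = 850.0  # assumed market quote, for illustration only
def pricing_error(sigma):
    return BSM_call_value(8200.0, 8000.0, 0, 1.0, 0.025, sigma) - C_market

implied_vol = brentq(pricing_error, 0.01, 2.0)  # bracket the root between 1% and 200% vol
print(implied_vol)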

The volatility surface is the plot of the implied volatilities for different option strikes and different option maturities on the same underlying (an option chain).

Vol surfaces exhibit:
Smiles: option implied volatilities exhibit a smile form, i.e. for calls the OTM implied volatilities are higher than the ATM ones; sometimes they rise again for ITM options
Term structure: smiles are more pronounced for short-term options than for longer-term options, a phenomenon sometimes called the volatility term structure

To demonstrate the vol surface, I will use an option chain on AAPL stock as of 5/11/2017. I downloaded this data from a reputable vendor; you can find the file here: AAPL_BBG_vols

headers = ['Date','Strike','Call_Bid','Call_Ask','Call','Maturity','Put_Bid','Put_Ask','Put']
dtypes = {'Date': 'str', 'Strike': 'float', 'Call_Bid': 'float', 'Call_Ask': 'float',
          'Call':'float','Maturity':'str','Put_Bid':'float','Put_Ask':'float','Put':'float'}
parse_dates = ['Date', 'Maturity']

data = pd.read_csv('AAPL_BBG_vols.csv',skiprows=1,header=None, names=headers, dtype=dtypes, parse_dates=parse_dates)
data['BS_Imp_Vol'] = 0.0
data['TTM'] = 0.0
r = 0.0248 #risk-free rate
S0 = 153.97 # spot price as of 5/11/2017
div = 0.0182
for i in data.index:
    t = data['Date'][i]
    T = data['Maturity'][i]
    K = data['Strike'][i]
    Put = data['Put'][i]
    time_to_maturity = (T - t).days/365.0
    data.loc[i, 'TTM'] = time_to_maturity
    # objective function: BSM put price minus the market put quote
    def func_BS(sigma):
        return BSM_put_value(S0, K, 0, time_to_maturity, r, sigma) - Put

    # solve for the implied volatility with Brent's method, bracketed between 3% and 100%
    bs_imp_vol = brentq(func_BS, 0.03, 1.0)
    data.loc[i, 'BS_Imp_Vol'] = bs_imp_vol

Once we have the implied volatilities, we will generate a grid of strikes and maturities and use cubic interpolation to derive the missing implied volatilities needed for a smooth surface.

ttm = data['TTM'].tolist()
strikes = data['Strike'].tolist()
imp_vol = data['BS_Imp_Vol'].tolist()
f = interp2d(strikes,ttm,imp_vol, kind='cubic')

plot_strikes = np.linspace(data['Strike'].min(), data['Strike'].max(),25)
plot_ttm = np.linspace(0, data['TTM'].max(), 25)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
X, Y = np.meshgrid(plot_strikes, plot_ttm)
Z = np.array([f(x,y) for xr, yr in zip(X, Y) for x, y in zip(xr,yr) ]).reshape(len(X), len(X[0]))
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,linewidth=0.1)
ax.set_xlabel('Strikes')
ax.set_ylabel('Time-to-Maturity')
ax.set_zlabel('Implied Volatility')
fig.colorbar(surf, shrink=0.5, aspect=5)
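As a quick sanity check, the fitted interpolator can be queried at an arbitrary (strike, time-to-maturity) point; the strike of 150 and maturity of 0.5 years below are just illustrative values.

# query the interpolated surface at a strike of 150 and 0.5 years to maturity
print(f(150.0, 0.5))         # interp2d returns a small array containing the implied vol
# a full smile slice across the plotted strike grid at a fixed maturity of 0.5 years
print(f(plot_strikes, 0.5))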