Estimating Potential Future Exposure with QuantLib and AWS EMR – Part II

In my previous post, we saw how to submit a PySpark job to an AWS EMR cluster. In this post, I will go over the setup of the cluster.

Before we start with the cluster, we must have a certificate keypair (.pem file) and a security group set up in AWS. There are many resources available online for this, so I will not go into the details.

Next is selecting the custom AMI that I mentioned in my previous post as the base machine for all nodes in the cluster, so that Anaconda Python and QuantLib are already installed on every node. The AMI that I used is publicly available for now; it costs me about $5 per month to keep it alive. The details of the AMI are as shown below.

I selected one m3.xlarge (4 cores, 15 GB) machine for my master node and three r4.8xlarge (16 cores, 244 GB) machines for my worker nodes. I also always select spot pricing for all my nodes.

Once the cluster is up and running (in the Waiting state), first secure-copy the spark-pfe.py file (available at my spark-pfe github repo) to the master node, then SSH into the master node to run the job using the spark-submit script from the previous post. The PySpark job took about 7 minutes to complete: it computed the netting set NPV for 5,000 simulations across 454 future dates for 2 swaps and 1 FxFwd.

The output of the simulation was written to an S3 bucket, and now it's time to pull it onto a local machine for analysis. Loading the NPV cube on a local machine is fine for demonstration purposes since we have only 5,000 simulations, but for production I would load the NPV cube into Amazon Redshift or Aurora. We will use the boto3 library to download the files from S3.

Once we have the time grid and NPV cube in memory, we can do some plots to visualize the simulated exposure paths. The blue paths are collateralized exposures and the red are uncollateralized. Next we calculate and plot the Expected Exposure, where we take only the positive exposures from the simulated exposure paths. Then we plot the PFE curves, where we take the 95% quantile of the expected exposures. Finally, we calculate the maximum PFE for both curves:

Maximum Uncollateralized PFE: 260,962.61
Maximum Collateralized PFE: 252,916.08

The spark-submit script that I have used is by no means optimized.
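As a sketch of that analysis step, the exposure aggregation (positive exposures, Expected Exposure, then the 95% quantile) might look like this in NumPy; the random cube below is only a stand-in for the real one downloaded from S3:

```python
import numpy as np

# Stand-in for the real NPV cube: in the post, the cube is first
# downloaded from the S3 bucket (e.g. with boto3) and then aggregated.
# Same shape as the post's run: 5000 simulation paths x 454 future dates.
rng = np.random.default_rng(42)
npv_cube = rng.normal(loc=0.0, scale=50_000.0, size=(5000, 454))

# Exposure is the positive part of the netting set NPV on each path/date.
exposure = np.maximum(npv_cube, 0.0)

# Expected Exposure: average exposure across paths at each future date.
ee = exposure.mean(axis=0)

# PFE curve: 95% quantile of exposure across paths at each date,
# and the maximum PFE over the whole time grid.
pfe_curve = np.percentile(exposure, 95, axis=0)
max_pfe = pfe_curve.max()
```

In the real script, `npv_cube` would be the array read back from the S3 output files, and collateralized and uncollateralized cubes would each go through the same aggregation.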
I tried tuning the various Spark memory and executor-count parameters in a trial-and-error approach within the limited time frame I had. The configuration I came up with works reasonably well for the cluster I used, but I am sure it can be improved. Thanks for stopping by.

Estimating Potential Future Exposure with QuantLib and AWS EMR – Part I

Counterparty risk is the risk that a party to an OTC derivatives contract may fail to perform on its contractual obligations, causing losses to the other party. Credit exposure is the actual loss in the event of a counterparty default. Some of the ways to reduce counterparty risk:

• Netting: Offsetting positive and negative contract values with the same counterparty reduces exposure to that counterparty
• Collateral: Holding cash or securities against an exposure
• Central counterparties (CCP): Using a third-party clearing house as a counterparty between buyer and seller, with both sides posting margin

Potential Future Exposure (PFE) is a measure of credit risk: the worst exposure one could have to a counterparty at a certain time in the future, with a certain level of confidence. For example, for a PFE of $100,000 with 95% confidence, we expect to have an exposure (the loss in case the counterparty defaults with no recovery) greater than $100,000 in only 5% of scenarios.

A netting set is a group of OTC trades (these could be interest rate swaps, FxFwds or CCS) facing the same counterparty. An ISDA CSA agreement with that counterparty defines how exposure is treated within the netting set, but usually we can "net" the exposures of the different instruments in the set, which reduces the overall exposure. For example, a positive exposure on a swap could be netted against a negative exposure on an FxFwd.
Estimating PFE involves simulating future market risk scenarios, calculating the "netted" MtM values of the OTC trades facing the same counterparty at various future dates under each scenario, keeping only the positive MtMs, which represent our exposure to the counterparty, and then taking, say, the 95% quantile of the peak exposures.

For an interest rate swap, the market risk factor is the underlying forward curve, which determines the NPV of the floating leg. For an FxFwd, it's the forward interest rate curves for the two currencies and the forward FX rate. In this post, to generate future scenarios of the curves, I use the Hull-White one-factor short rate model, which is assumed to be already calibrated. There are many excellent resources available online that discuss interest rate models, PFE and QuantLib/Python in general; some of those I have used here are:

• Hull White Term Structure Simulations
• Expected Exposure and PFE Simulation
• Derivatives CVA Calculation

The focus of this post is to provide a proof of concept of estimating PFE on Amazon Web Services' Elastic MapReduce (EMR). AWS EMR provides a complete platform of Hadoop-ecosystem tools to process vast amounts of data in a dynamically scalable (hence "elastic") environment. I will use Apache Spark hosted in an EMR cluster to generate future curve simulations and perform NPV calculations on various dates under each scenario. As you can see, for a realistic number of simulations, the number of calculations needed for a netting set comprising even a few swaps will easily exceed the capacity of a single machine.

Challenges and Issues

But the devil's always in the details, right? Here are some of the challenges I faced with this whole setup involving Amazon EMR, QuantLib and Python, and the ways I overcame them:

While developing my Spark PFE "job" to be submitted to the EMR cluster, I needed to spin up a cluster on demand. Obviously, I needed to terminate it as soon as I was done, or else I would continue to be charged for it.
This means that every time I spin up a new cluster, I get fresh EC2 instances with no QuantLib or Anaconda Python installed on them. Installing Anaconda Python, compiling Boost, then QuantLib, then the QuantLib-SWIG wrappers, and making all of it work seamlessly is not for the faint of heart 🙂 But there is a way to solve this problem. AWS has something called an AMI (Amazon Machine Image), which is really a base machine image you can build your cluster with. So I spun up an EC2 instance, installed/compiled all the software I needed, saved it as an AMI, and used it as the base machine for the cluster so that QuantLib/Python is already set up on all nodes from the get-go.

However, AWS EMR requires that a custom AMI for cluster nodes be based on Amazon's own Linux AMI. Amazon Linux is loosely based on RHEL/CentOS, but it's not the same. I faced several issues compiling both Boost and QuantLib on this version of Linux, and all of them were related to the gcc/g++ version that came with it, which was really old (4.3). So I upgraded gcc to version 6.4 and then installed Boost and QuantLib on top of it.

In a very simplified view, Spark is a distributed architecture with a master node and a number of worker nodes. All the input, output and intermediate data is stored in memory as resilient distributed datasets (RDDs). Objects are serialized and distributed to the worker nodes in order to be processed. However, I quickly realized that QuantLib/Python SWIG objects cannot be serialized (or pickled), so we must re-create QuantLib objects like swaps, dates and curves from pure Python objects on the worker nodes each time we need them.

I also wanted to incorporate collateral movements into my exposure calculations, since all bilateral OTC swaps/FxFwds are required to be collateralized, and I wanted to see how collateral affects PFE.
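A minimal sketch of the serialization workaround mentioned above: ship plain-Python descriptions to the workers and rebuild the QuantLib objects inside the executor function. The swap description and the pricing function below are hypothetical placeholders, not the post's actual code:

```python
import pickle

# QuantLib SWIG objects are not picklable, so only plain-Python
# descriptions are shipped to Spark workers; the QuantLib objects are
# rebuilt inside the function that runs on each executor.
swap_spec = {
    "notional": 10_000_000,
    "fixed_rate": 0.025,
    "start": "2016-01-15",
    "maturity": "2026-01-15",
}

def price_on_worker(spec):
    # In the real job this would re-create ql.VanillaSwap, ql.Schedule,
    # curves, etc. from `spec` and return the swap NPV; a placeholder
    # calculation stands in here.
    return spec["notional"] * spec["fixed_rate"]

# The plain dict round-trips through pickle, as Spark requires ...
assert pickle.loads(pickle.dumps(swap_spec)) == swap_spec
# ... and the worker-side function rebuilds whatever it needs from it.
npv = price_on_worker(swap_spec)
```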
As it turns out, collateral reduces PFE but doesn't completely eliminate it, due to the margin period of risk (7 days in this post) and the asymmetry between collateral amounts delivered and posted arising from Threshold and MTA requirements. I have referenced Jon Gregory's Counterparty Credit Risk, Second Edition, and its spreadsheets for this purpose.

I have also added an FxFwd (EUR/USD in this case) to the netting set, as we deal with large volumes of FxFwds on a day-to-day basis. The FxFwd example in this post is not very realistic, though, as there is only one FxFwd, which matures in 1 year and is not replaced. Usually, FxFwds in a portfolio are continually rolled over, so this single FxFwd really twists the netting set NPV until it matures.

The way I calculated the FxFwd exposure at time t is as follows:

forward points = spot rate(t) - forward rate
YF(t) = year fraction from t until maturity of the FxFwd
r_dom(t) = domestic currency zero rate
FxFwd NPV(t) = ((spot rate(t) + fwd points) * FxFwd notional) / (1 + r_dom(t) * YF(t))
FxFwd Exposure(t) = (FxFwd notional * spot rate(t)) - FxFwd NPV(t)

The forward rate is the agreed-upon rate, and the spot rate at time t is obtained from an FX rate simulation using the Garman-Kohlhagen process. The r_dom(t) rate is obtained from the same yield curve simulation used for the swap MtM calculations.

The main stages of our Spark job are as follows. The Spark driver program loads the netting set instruments, the USD swap curve, the EUR swap curve and the previous swap fixings (as the swaps are "seasoned", having been traded a year ago) from an Amazon S3 bucket. We determine the dates on which we want to estimate the netting set NPV, based on a weekly collateral schedule plus any swap reset dates. We calculate today's netting set NPV, which will be the base from which the simulated NPVs start.
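The FxFwd exposure formulas above translate directly into a small helper; the values passed in at the bottom are purely illustrative, not taken from the post's data:

```python
# Direct implementation of the FxFwd exposure formulas above.
def fxfwd_exposure(spot, fwd_rate, notional, r_dom, yf):
    # forward points = spot rate(t) - contractual forward rate
    fwd_points = spot - fwd_rate
    # FxFwd NPV(t) = ((spot + fwd points) * notional) / (1 + r_dom(t) * YF(t))
    npv = (spot + fwd_points) * notional / (1.0 + r_dom * yf)
    # FxFwd Exposure(t) = (notional * spot rate(t)) - FxFwd NPV(t)
    return notional * spot - npv

# Illustrative values only: EUR/USD spot 1.10 vs contractual forward 1.12,
# six months to maturity, 2% domestic zero rate.
expo = fxfwd_exposure(spot=1.10, fwd_rate=1.12, notional=1_000_000,
                      r_dom=0.02, yf=0.5)
```

In the Spark job, `spot` and `r_dom` would come from the simulated FX rate and yield curve at each date and scenario.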
We generate a matrix of normally distributed random numbers using the Spark machine learning library MLlib, which returns an RDD with the random numbers distributed across the cluster in the form of partitions. The work to compute the NPVs (collateralized and uncollateralized) is distributed by Spark to the available nodes (containers with executor threads) and is done in parallel. At the driver, we wait for all the work to be done, then collect the results of all the NPV simulations, the "NPV cube", write them out to a file in an Amazon S3 bucket and terminate the cluster. Another Python script, which runs on a local machine (not in the cluster), then reads from this S3 bucket using the boto3 library and performs the visualization and the final quantile calculation.

Let's dive into the code. All the code and input files used in this post are available at the Spark-PFE github repo. Here are the various input files used in this post. Obviously, the swap curves are not real, but they are close to real. The Libor fixings are real, since that data is in the public domain.

Below is the main method invoked by the spark-submit driver program. Below is the method that will be executed on the executors in parallel on the EMR cluster. Below is the driver program initialization and argument parsing. The spark-submit script used to submit this job looks as follows:

After the Spark job completes, the output files, the time-grid array and the NPV cube, are stored in an S3 bucket. We will use another Python script to pull the files and visualize the PFE, which will be the subject of Part II of this post.

Swap Curve construction using Global Optimization

In my previous post, I discussed option implied volatility and the Binomial model. In this post, we switch gears and discuss swap curve construction. Building a swap curve is so fundamental to interest rate derivatives pricing that it is probably one of the most closely guarded pieces of proprietary information on a trading desk.
Pricing an interest rate swap involves bootstrapping a blended curve of different instruments based on their maturities and market liquidity. Usually cash deposits and Eurodollar futures are used at the short end, and market swap rates are used at the long end. At the long end, however, only a subset of market swap rates is available, and bootstrapping requires all the missing rates to be interpolated from the known rates. This makes the interpolation methodology a critical part of curve building. Also, since forward rates are a gradient of the discount rates, any misalignment in the latter is magnified in the former.

There is an alternative to bootstrapping called the global optimization approach, where the short end of the curve is bootstrapped as usual, but at the long end we "guess" the forward rates, compute the par rates of the market swaps and minimize the error between the actual par rates and the computed par rates. We also add a smoothness constraint to the minimization so that the overall gradient of the curve is minimized. This approach is illustrated in the excellent book Swaps and Other Derivatives, 2nd Ed., by Richard Flavell.

I will use QuantLib to generate swap schedules and to deal with business day conventions. QuantLib can of course build a full swap curve, but I will use Scipy's optimize package for curve building. My objective was to match Spreadsheet 3.9, "Building a blended curve", from the above book. Unfortunately, the spreadsheet does not show the equations behind Excel's Solver used for the optimization, only the final result, which leaves considerable ambiguity that I hope I will be able to clear up.

Note on QuantLib and Python

There are numerous resources online on how to build QuantLib from source and then build the Python extensions; I would like to point you to the precompiled QuantLib-Python package maintained by Christoph Gohlke.
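Before moving on, here is a toy sketch of the global-optimization idea described above, with made-up semi-annual periods and market par quotes; Scipy's `least_squares` plays the role of Excel's Solver, and a smoothness term on the forward-rate gradient is appended to the fit residuals:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy sketch with assumed numbers: semi-annual forward rates are the
# unknowns; par swap rates computed from them are matched to made-up
# market quotes, with a smoothness penalty on the forward gradient.
taus = np.full(20, 0.5)                     # 10y of semi-annual periods
market = {4: 0.015, 10: 0.018, 20: 0.020}   # assumed par quotes: 2y, 5y, 10y

def par_rates(fwds):
    dfs = np.cumprod(1.0 / (1.0 + fwds * taus))   # discount factors D_n
    annuity = np.cumsum(taus * dfs)               # running sum of tau_n * D_n
    return (1.0 - dfs) / annuity                  # par rate to each period

def residuals(fwds):
    pr = par_rates(fwds)
    fit = [pr[n - 1] - quote for n, quote in market.items()]
    smoothness = np.diff(fwds)                    # penalize forward gradient
    return np.concatenate([fit, smoothness])

sol = least_squares(residuals, x0=np.full(20, 0.02))
fitted = par_rates(sol.x)
```

The real build blends in deposits and futures at the short end and uses QuantLib schedules and day counts; this sketch only shows the optimizer's shape.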
If you are on Windows, you can just install the whl package and get started.

First, some common formulae we will be using:

$$Discount\ Factor: DF_t = \frac{1}{1 + r_t d_t}$$

where $d_t$ is the year fraction and $r_t$ is the annual rate.

$$Forward\ Rate: F_{t/T} = \frac{DF_t/DF_T - 1}{T-t}$$

where $DF_t$ is the discount factor to $t$ and $DF_T$ is the discount factor to $T$ (both from today).

$$Par\ Rate\ of\ Swap: \frac{1-D_T}{\sum_{n=1}^{N}\Delta_n D_n}$$

where $D_T$ is the maturity-date discount factor, $\Delta_n$ are the year fractions between two reset dates and $D_n$ are the various reset-date discount factors.

The optimizer output was: gtol termination condition is satisfied. Function evaluations: 20, initial cost: 1.7812e-03, final cost 1.5585e-09, first-order optimality 1.04e-08. Scipy optimization took: 2.5262434 seconds

Option Volatility and Binomial Model

In my previous post, Options and Volatility Smile, we used the Black-Scholes formula to derive implied volatility from a given option strike and tenor. Most of the options traded on exchanges are American (with a few index options being European) and can be exercised at any time prior to expiration. Whether it is optimal to exercise an option early depends on whether the stock pays a dividend, on the level of interest rates, and on other factors, and is a very complex subject. What I want to focus on is using the Binomial model to price an American option. I summarize below the Binomial model theory from the excellent book Derivative Markets, 3rd Ed., by Robert McDonald.

There are many flavors of the Binomial model, but they all have the following steps in common:

• Simulate future prices of the underlying stock at various points in time until expiry
• Calculate the payoff of the option at expiry
• Discount the payoff back to today to get the option price

1. Simulate future prices of the underlying

Assume the price of a stock is $S_0$ and over a small time period $\Delta t$, the price could either be $S_0u$ or $S_0d$, where $u$ is the factor by which the price rises and $d$ is the factor by which it falls.
The stock is assumed to follow a random walk; assume also that $p$ is the probability of the stock price rising and $(1-p)$ is the probability of it falling. There are many ways to choose the values of $u$, $d$ and $p$, and the various Binomial models differ in how these three parameters are calculated. In the Cox-Ross-Rubinstein (CRR) model, $u = \frac{1}{d}$ is assumed. Since we have 3 unknowns, 2 more equations are needed, and those come from the risk-neutral pricing assumption. Over a small $\Delta t$, the expected return of the stock is

$$pu + (1-p)d = e^{r \Delta t}$$

and the expected variance of the returns is

$$pu^2 + (1-p)d^2 - (e^{r \Delta t})^2 = \sigma^2 \Delta t$$

Solving for $u$, $d$ and $p$, we get

$$p = \frac{e^{r\Delta t} - d}{u-d}$$

$$u = e^{\sigma \sqrt{\Delta t}}$$

$$d = e^{-\sigma\sqrt{\Delta t}}$$

The CRR model generates the following tree as we simulate multi-step stock price movements; this is a recombining tree centered around $S_0$.

2. Calculating payoffs at expiry

In this step, we calculate the payoff at each node that corresponds to expiry.

For a put option, $payoff = max(K - S_N, 0)$
For a call option, $payoff = max(S_N - K, 0)$

where $N$ is a node at expiry with stock price $S_N$ and $K$ is the strike.

3. Discounting the payoffs

In this step, we discount the payoffs at expiry back to today using backward induction, where we start at the expiry nodes and step backwards through time, calculating the option value at each node of the tree.

For an American put, $V_n = max(K - S_n, e^{-r \Delta t} (p V_u + (1-p) V_d))$
For an American call, $V_n = max(S_n - K, e^{-r \Delta t} (p V_u + (1-p) V_d))$

where $V_n$ is the option value at node $n$, $S_n$ is the stock price at node $n$, $r$ is the risk-free interest rate, $\Delta t$ is the time step, $V_u$ is the option value from the upper node at $n+1$ and $V_d$ is the option value from the lower node at $n+1$.

All the variants of the Binomial model, including CRR, converge to Black-Scholes in the limit $\Delta t \to 0$, but the rate of convergence differs.
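The three steps above can be sketched as a self-contained CRR pricer for an American put; the sample parameters at the bottom are assumptions for illustration:

```python
import math

# CRR binomial pricer for an American put: simulate prices (step 1),
# take payoffs at expiry (step 2), discount by backward induction with
# the early-exercise check (step 3).
def crr_american_put(s0, k, t, r, sigma, n):
    dt = t / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u                                # CRR: u = 1/d
    p = (math.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
    disc = math.exp(-r * dt)

    # Steps 1 and 2: stock prices and put payoffs at the expiry nodes
    # (node j at level n has j up-moves and n - j down-moves).
    values = [max(k - s0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]

    # Step 3: backward induction with the early-exercise comparison.
    for i in range(n - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1.0 - p) * values[j])
            s = s0 * u**j * d**(i - j)
            values[j] = max(k - s, cont)       # exercise vs continuation
    return values[0]

# Assumed sample parameters: at-the-money put, 20% vol, 5% rate, 1y expiry.
price = crr_american_put(100.0, 100.0, 1.0, 0.05, 0.2, 500)
```

Swapping the payoff and exercise comparisons to `s - k` gives the American call version.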
The variant of the Binomial model that I would like to use is the Leisen-Reimer model, which converges much faster. Please see the original paper for the formulas, and a C++ implementation at Volopta.com which I have ported to Python in the next section. The Python code is going to look very similar to the Options and Volatility Smile post, except that we swap out the Black-Scholes framework for the Leisen-Reimer model. We will also use the same option chain data, AAPL_BBG_vols.csv.

A note on the Python code: I usually do not write code like this; I am purposely avoiding classes so that the focus remains on the objective, which is to understand the technique.

Options and Volatility Smile

An equity option represents the right to buy ("call" option) or sell ("put" option) a unit of the underlying stock at a pre-specified price (the strike) at a predetermined maturity date (European option) or at any time up to the predetermined date (American option). The option writer sells an option and the option holder buys it. For a European call option on an index with strike 8,000 and an index level of 8,200 at maturity, the option holder receives the difference of $200 from the option writer. This is called the intrinsic value or payoff of the option from the holder's point of view.

The payoff function for a call option is $$h_{T}(S,K) = max[S_{T}-K, 0] \tag{Eq. 1}$$

where $T$ = maturity date, $S_T$ is the index level at maturity and $K$ is the strike price.

In-the-money: a call (put) is in-the-money when S > K (S < K)
At-the-money: call or put is at-the-money when $\ S \approx K$
Out-of-the-money: a call is out-of-the-money when S < K (S > K)

A fair present value (which is different from the payoff) of a European call option is given by the Black-Scholes formula:

$\ C_{0}^{*} = C^{BSM}(S_{0},K,T,r,\sigma) \tag{Eq. 2}$

$\ S_{0}$ current index level (spot)
K strike price of the option
T time-to-maturity of the option
r risk-free short rate
$\ \sigma$ volatility or the std dev of the index returns

$\ C^{BSM} = S_{t} \cdot N(d_{1}) - e^{-r(T-t)} \cdot K \cdot N(d_{2})\tag{Eq. 3}$

where

$\displaystyle N(d) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{d} e^{-\frac{1}{2}x^{2}} dx$

$\displaystyle d1 = \frac{\log\frac{S_{t}}{K} + (r + \frac{\sigma^2}{2})(T-t)}{\sigma\sqrt{T-t}}$

$\displaystyle d2 = \frac{\log\frac{S_{t}}{K} + (r - \frac{\sigma^2}{2})(T-t)}{\sigma\sqrt{T-t}}$

The present value of the option is always higher than the undiscounted payoff, the difference being the time value. In other words, the option's present value is composed of the payoff plus the time value. The time value reflects the fact that there is always a chance of the option going in-the-money, or further in-the-money, before expiry.

Simulating Returns

The Geometric Brownian motion model of the BS equation is given by

$$\displaystyle dS_{t} = rS_{t}dt + \sigma S_{t} dZ_{t}\tag{Eq.4}$$

The discretized version is

$$\displaystyle S_{t} = S_{t - \Delta t} e^{(r - \frac{1}{2}\sigma^2) \Delta t + \sigma \sqrt{\Delta t} z_{t}}\tag{Eq.5}$$

where $t \in \{\Delta t, 2\Delta t, \ldots, T\}$

Using the above discretized version, we will simulate the spot prices with $S_{0}$=100, T=10, r = 0.05 and $\sigma$=0.2
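A minimal vectorized simulation of Eq. 5 with those parameters (the path and step counts are arbitrary choices here):

```python
import numpy as np

# Simulating spot paths with the discretized GBM of Eq. 5:
# S_0 = 100, T = 10, r = 0.05, sigma = 0.2 (values from the text).
s0, T, r, sigma = 100.0, 10.0, 0.05, 0.2
n_steps, n_paths = 50, 10_000          # arbitrary discretization choices
dt = T / n_steps

rng = np.random.default_rng(1)
z = rng.standard_normal((n_paths, n_steps))

# Log-increments (r - sigma^2/2) dt + sigma sqrt(dt) z, cumulated per path.
log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.cumsum(log_increments, axis=1))
```

Under the risk-neutral measure the average terminal price should be close to $S_0 e^{rT}$, which is a quick sanity check on the simulation.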

Implied volatility is the value of $\sigma$ that solves Eq. 2, given the option market quote $C_{0}^{*}$.
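That inversion can be sketched as a one-dimensional root-find; here Brent's method is used against a self-contained Black-Scholes call price (Eq. 3, with $d_1$ and $d_2$ as above), evaluated at $t = 0$:

```python
from math import log, sqrt, exp, erf
from scipy.optimize import brentq

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, vol):
    # Black-Scholes call price (Eq. 3).
    d1 = (log(s / k) + (r + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s * norm_cdf(d1) - exp(-r * t) * k * norm_cdf(d2)

def implied_vol(c_market, s, k, t, r):
    # Sigma that reproduces the market quote, bracketed in (0.01%, 500%).
    return brentq(lambda vol: bs_call(s, k, t, r, vol) - c_market, 1e-4, 5.0)
```

Round-tripping a price produced by `bs_call` through `implied_vol` recovers the input volatility, which is a useful unit test for any implied-vol routine.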

Volatility surface is the plot of the implied volatilities for different option strikes and different option maturities on the same underlying (an option chain).

Vol surfaces exhibit:
Smiles: option implied volatilities exhibit a smile form, i.e. for calls the OTM implied volatilities are higher than the ATM ones; sometimes they rise again for ITM options.
Term structure: smiles are more pronounced for short-term options than for longer-term options, a phenomenon sometimes called the volatility term structure.

To demonstrate Vol Surface, I will use an option chain on AAPL stock as of 5/11/2017. I have downloaded this data from a reputable vendor, you can find this file here AAPL_BBG_vols

Once we have the implied volatilities, we will generate a grid of strikes and maturities and use Cubic interpolation to derive the missing implied volatilities needed for a smooth surface.
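That interpolation step can be sketched with scipy's `griddata`; the scattered (strike, maturity, vol) points below are made up, standing in for the implied vols computed from the real option chain:

```python
import numpy as np
from scipy.interpolate import griddata

# Made-up scattered implied vols at (strike, maturity) points; the real
# inputs come from the AAPL option chain used in this post.
strikes = np.array([90.0, 100.0, 110.0, 90.0, 100.0, 110.0, 95.0, 105.0])
mats = np.array([0.25, 0.25, 0.25, 1.00, 1.00, 1.00, 0.50, 0.50])
vols = np.array([0.28, 0.22, 0.25, 0.26, 0.21, 0.23, 0.24, 0.22])

# Regular grid of strikes and maturities (kept inside the data's convex
# hull), then cubic interpolation for a smooth surface.
grid_k, grid_t = np.meshgrid(np.linspace(91, 109, 19),
                             np.linspace(0.3, 0.95, 14))
surface = griddata((strikes, mats), vols, (grid_k, grid_t), method="cubic")
```

The resulting `surface` array can then be fed to a 3D plot to visualize the smile and term structure together.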