We decided in June to begin the process of tuning our platform to automatically Extract and Normalize the Effective Tax Rate Reconciliation. We did this because we know many of our clients are interested in testing hypotheses about the factors affecting the Effective Tax Rate and certainly this is an important issue as Congress contemplates significant changes to the tax code.
We have made significant progress – most of the credit goes to Manish Pokhrel, who is probably pulling his hair out right now because he just discovered another exception to what we originally imagined as a smooth and easy process.
I want to caution you that we are still not at the level of perfection we have reached with some of the other artifacts we process, but we are really close. Today at 4:07 ENANTA PHARMACEUTICALS filed their latest 10-K. Here is what the ETR table looked like in their filing:
Soon after we grabbed the filing, and after much angst, hand-wringing and some black magic, our system made the raw data available to our clients for immediate use in their research and analysis in the following form.
The amazing thing, I think, is that in addition to this latest filing we have every ETR table ENANTA has filed since they began filing 10-Ks in 2013.
That’s right – our pre-parsing and normalization tools have made available over two million observations of various data items from EDGAR filings. This is a huge number. It sort of blew me away when I received a message this morning from one of our team who is in the process of provisioning new storage space. He was running a test involving just 10-K artifacts and had logged more than 1.7 million items. Combining that with our PROXY-extracted artifacts, the total jumps to more than two million.
We are having to provision new space and work on a new delivery architecture because our existing system has outgrown both the ability to manage the volume of incoming artifacts and the number of outgoing requests. We are getting ready to add insider trading data to match to the Named Executive Officers and the Directors. That addition alone will add approximately another million items. We have also been parsing the older 10-Ks into item number sections to conform to our existing availability of the newer ones – that process should add at least another million separate files for download.
Once our new storage space is provisioned and working we will then turn back to finishing a new version of the Search, Extraction and Normalization Engine. One of the key goals of this project will be to improve your download speed by a factor of at least four. Our existing architecture did not allow for parallel access to our data repository. Our new platform will allow us to design the application to run multiple simultaneous connections between your desktop and our repository for data access.
I had an email from a faculty researcher who needed to capture the frequency of board meetings for a sample of companies – they wanted some help. I was setting up the system and decided that this would be a worthwhile post to help illustrate why Search is not enough – you need Extraction and Normalization.
While there are many ways the concept of reporting the frequency of board meetings can be expressed – I know from past review that one form of the expression is ‘The Board met N times in YYYY’. So that was the basis of my first search:
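To make the idea concrete, here is a minimal sketch of the kind of pattern such a search implies – the regex and function name are hypothetical illustrations, not our platform’s actual implementation:

```python
import re

# Hypothetical pattern for 'The Board met N times in YYYY' --
# an illustration, not the platform's implementation.
PATTERN = re.compile(r"[Tt]he Board (?:of Directors )?met (\w+) times? in (\d{4})")

def extract_meeting_frequency(sentence):
    """Return (count_text, year) when the sentence matches, else None."""
    match = PATTERN.search(sentence)
    return (match.group(1), match.group(2)) if match else None

print(extract_meeting_frequency("The Board met seven times in 2016"))
# -> ('seven', '2016')
```

Finding every variant of the phrasing is the hard part; the mechanical extraction is not.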
We found 796 relevant documents – now to EXTRACT & NORMALIZE those findings. Just select the ContextNormalization feature from the menu and specify the inputs:
After pressing the Okay button the results will soon be available in the Output Folder. The results include enough details to create an audit trail back to the original document and they also have the data that is needed:
I highlighted three of the rows to drive home the point that this is a versatile tool. It can work with various forms of number expressions.
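As an illustration of what normalizing those number expressions involves, here is a hedged sketch – the word table and function are hypothetical, not the ContextNormalization code itself:

```python
# Map number words to values; hypothetical, for illustration only.
WORDS = {
    "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
    "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
    "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
    "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
    "nineteen": 19, "twenty": 20, "thirty": 30, "forty": 40,
}

def normalize_count(text):
    """Map '7', 'seven', or 'twenty-one' to an integer."""
    text = text.strip().lower()
    if text.isdigit():
        return int(text)
    # Handle hyphenated forms like "twenty-one" additively.
    total = 0
    for part in text.split("-"):
        if part not in WORDS:
            raise ValueError(f"unrecognized number word: {part!r}")
        total += WORDS[part]
    return total

print(normalize_count("seven"), normalize_count("7"), normalize_count("twenty-one"))
# -> 7 7 21
```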
From start to finish this took me about three minutes. The hard part is to continue and find the other ways this concept will be expressed. I tried another form, ‘Board held’. This returned more results:
I would use the same strategy as before to Extract and Normalize – here is a peek at the results:
Again – a user intent on capturing the meeting frequency of a large sample is going to have to learn how the concept is expressed (clearly there are other ways to express this concept) and continue with the alternatives until they have identified the forms of expression in their sample. However, once they use their knowledge our tools can help them very rapidly convert those search results into data.
We Extract and Normalize the Executive and Director compensation data whether it is reported in the 10-K or the DEF 14A. Compensation is a required disclosure in the 10-K but companies can take the relief offered by the CFR and choose to incorporate it by reference to the DEF 14A (proxy) if the proxy is expected to be filed within what we think is a 90-day window following the filing deadline for the 10-K.
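As a rough illustration of that window, here is a sketch of the date check – the 90-day cutoff measured from the 10-K deadline is our reading of the relief, and the function name and dates are assumptions, not a statement of the rule’s exact mechanics:

```python
from datetime import date, timedelta

# Sketch of the incorporation-by-reference window; the 90-day cutoff
# is our assumption about the relief, not legal advice.
def proxy_within_window(ten_k_deadline, proxy_filed, window_days=90):
    """True when the proxy lands inside the window after the 10-K deadline."""
    return proxy_filed <= ten_k_deadline + timedelta(days=window_days)

# A hypothetical filer with a March 31 deadline and a June 20 proxy:
print(proxy_within_window(date(2017, 3, 31), date(2017, 6, 20)))  # True
```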
We have started seeing more and more discrepancies between what is filed in the 10-K and what is ultimately reported in the proxy. These discrepancies are not usually very large but they are interesting. Argos Therapeutics Inc filed their proxy today – here is a link. On May 1, 2017 they filed an amendment to their 10-K (10-K/A) which appears to have been filed solely to include Items 10 – 14.
When their filing was made today we captured the EC data and our system triggered an event because the table in the proxy covered the same reporting periods as the table in the 10-K/A and the totals did not match. Here is the data that was reported in the 10-K/A:
Here is the EC data as it was reported in the proxy:
The total for each year has changed and it appears that the differences can be explained by the differences reported for Other. So looking more closely at the description of the Other amount, it appears that they modified their description between the two filings. Here is the language used in the proxy:
The description of Other in the 10-K/A does not mention 401(k) matching contributions – otherwise it matches verbatim the language used in the DEF 14A. Now that we can explain the discrepancy we will remove the prior data and then update with the new table.
Today at 10:59 ET Brown-Forman (CIK – 14693) filed their proxy. I was particularly happy because we had a call scheduled at 11:30 with a potential new client and I find it keeps their attention when we can demonstrate to them an example of our processes working with real-time filings for companies that they are familiar with. Given Brown-Forman’s status as a Fortune-1000 company I was chortling to myself that this was perfect.
Unfortunately it was not as perfect as I would have liked. Soon after the filing was made I received a notification that there was a TOTAL error in the Executive Compensation table. Here is the original table.
We run several validation tests on the data as we are parsing the document – one of those is of course that the totals tie with the reported total. There are a variety of edge cases where the total may not match because of a small mistake by the registrant. One example we see is that a registrant may accidentally insert a period instead of a comma. We flag all math errors for human intervention.
In this case we can’t identify the reason for the error. The components of the 2017 compensation reported for Mr. McCallum sum up to 2,518,874. The reported total is 2,578,874.
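The validation itself is simple arithmetic. Here is a sketch using the 2017 amounts for Mr. McCallum – the function is illustrative, not our production code:

```python
# Sketch of the total-tie test using the 2017 amounts reported for
# Mr. McCallum; the function is illustrative, not our production code.
def check_total(component_sum, reported_total):
    """Return the unexplained difference, or None when the totals tie."""
    difference = reported_total - component_sum
    return difference if difference != 0 else None

print(check_total(2_518_874, 2_578_874))  # 60000 -- the $60,000 gap
```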
Our next step is to scrutinize the filing to see if there is some discussion of the $60,000 amount. We couldn’t find anything so now we send off an email to their Investor Relations Department and wait for a response. If we don’t get a response within a week we will push the table as is.
These kinds of addition errors happen infrequently – but then we discovered another one about half an hour later when ALJ Regional Holdings (CIK – 1438731) filed their proxy while I was on the call with the client. In this case the error was present in two years of data (2016 and 2015).
The interesting thing is that when they filed their proxy last year – the reported total for Mr. Reisch was $785,250 – which is the sum of the reported components. However, they are now reporting $804,000 as the total for 2015. Which document is correct? As with Brown-Forman we sent off an email and so I hope we find out soon.
The potential client was impressed that we had the infrastructure to address these issues.
One of the things we pride ourselves on is what we think is the fastest and most comprehensive delivery of Executive and Director Compensation data on the planet (a little bit of hyperbole never hurts). In an attempt to focus on that we have been working to add a modal window to our website so that a user who visits our prime website will see the most recent comp table we have processed.
I had an interesting question the other day from a visitor who wondered why we were displaying the compensation data from SPRINT in the middle of the day on 6/20 when four other issuers had reported EC data since Sprint filed their proxy at 5:00 pm on 6/19. If we were so good where was that comp data?
Here is the sequence of filings:
So when I received the query early on the 20th I had to look. We deliberately did not push out the DC/EC or Audit Fee data for those entities because of an interesting issue. MITCHAM INDUSTRIES and DOCUMENT SECURITY SYSTEMS INC both filed a 10-K a couple of months earlier and both reported that data in their 10-Ks so our system flagged the new data as SAME CONTENT/DIFFERENT DOCUMENT.
Here is the EC table from MITCHAM INDUSTRIES as reported in their 10-K on 5/31:
Here is the EC table from their Proxy filed on 6/20:
Here is the data from their 10-K after we normalized it and made it available on 5/31:
Further, FUNDVANTAGE TRUST’s and CHROMADEX CORP’s DEF 14As related to Special Meetings and did not include summary compensation data. It was not until SPEEDEMISSIONS INC filed their proxy at 10:20 (CT) that we had new data.
We ultimately do not replace the existing data with the new data if it matches the content from a previous filing.
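One way to implement that check is to fingerprint each normalized table and compare it against what is already on file. Here is a sketch under that assumption – hypothetical, not our production implementation:

```python
import hashlib

# Sketch of a SAME CONTENT/DIFFERENT DOCUMENT check: fingerprint the
# normalized table so a proxy repeating the 10-K table is not re-pushed.
# Hypothetical, not the production implementation.
def table_fingerprint(rows):
    """Hash a normalized table, ignoring internal whitespace differences."""
    canonical = "|".join(" ".join(str(cell).split()) for row in rows for cell in row)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

table_from_10k = [("McCallum", 2017, 2_518_874)]
table_from_proxy = [("McCallum", 2017, 2_518_874)]
print(table_fingerprint(table_from_10k) == table_fingerprint(table_from_proxy))  # True
```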
While it would be silly of me to claim we are perfect, I did dodge that bullet as we did have the most timely compensation data available.
When I was working on the last post I realized I have never shown our Search History feature. I use this all the time as it saves me significant time and effort when working with a complex search that I am refining.
The Search History is available from the Search menu under the (wait for this) History tab. It pulls up all of the Search History from your local user id since the last time you cleared the history. When you select Search History the Search History control appears:
You can modify any of the fields in this control, or you can select the Open Search button and all of the relevant fields of the application will be populated with these values – you can then modify as needed.