Pulling date stamps API v HS


#1

NOT A PROGRAMMER, JUST WORKING WITH ONE :slight_smile:

We are looking at our API data from HubSpot, specifically the became_on_mql_date_value property on the company level, which is a triggered date stamp. Part of the output is pictured below. (It was produced by a simple GET https://api.hubapi.com/companies/v2/companies/recent/created call. If you make this call yourself, you'll see that the output JSON contains lots of companies that don't even have that field.)

However, this output is only a fraction (23 out of 84, all from the last 2 months) of what is shown by HS.

Could you please explain why there is such a big difference in the number of companies pulled from the API and those reported in HS?


#2

Hi @AngelsScarselli

Looking at the data for your companies, I see 97 companies with a value set for the became_on_mql_date property. 11 of those have an empty string "" set as the value, which is where the 86 would come from if you're looking at the property analytics.

Is the output that you’re looking at only looking at companies created in the last two months? Some of the companies had that property set back in March, so you may be missing some of the companies with that value if you’re not pulling all of your companies.


#3

Hi @dadams,

Thanks for the response. The output we're looking at should be a lifetime output, regardless of the date the company was created or the date the property was stamped. When we use this call: GET https://api.hubapi.com/companies/v2/companies/recent/created, we get 250 rows, only 26 of which have the date stamp filled in.

Can you tell me where you are getting your numbers? Was it pulled through API or a search through HS views? Thanks for your help!


#4

That endpoint has a hard limit of 250 records that it can return in a single request. In order to get all of your companies, you’d need to page through the results using the offset= parameter.

Each request will return an offset value in the response, and you’d use that offset value in the next request URL to get the next set of records.
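The offset loop described above can be sketched roughly as follows. This is only an illustration, not official HubSpot sample code: the exact response field names (`companies`, `has-more`, `offset`) are assumptions about the response shape, and the page-fetching function is injected so the loop can be shown without real HTTP calls. In real use, `fetch_page` would wrap a GET to `https://api.hubapi.com/companies/v2/companies/recent/created?offset=...` with your authentication.

```python
def fetch_all(fetch_page):
    """Collect every record by feeding each response's offset into the next request.

    fetch_page(offset) stands in for one HTTP GET; in a real script it would
    call the HubSpot endpoint with ?offset=<offset> (no offset on the first call).
    Field names below are assumed, not taken from HubSpot's docs.
    """
    records = []
    offset = None
    while True:
        page = fetch_page(offset)           # one request per loop iteration
        records.extend(page["companies"])   # accumulate this page's records
        if not page.get("has-more"):        # stop when the API reports no more pages
            return records
        offset = page["offset"]             # use the returned offset in the next request


# A fake two-page API, standing in for the real endpoint:
def fake_api(offset):
    if offset is None:  # first request, no offset parameter yet
        return {"companies": [1, 2, 3], "has-more": True, "offset": 3}
    return {"companies": [4, 5], "has-more": False, "offset": 5}


all_records = fetch_all(fake_api)
print(all_records)  # all five records across both pages
```

The key point is that you never hard-code page numbers: each response tells you the offset to send next, and you keep requesting until the API indicates there are no more results.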


#5

Thanks so much @dadams, your suggestions helped us find a solution!