$handler : Resource
The internal cURL handler
Generic, light-weight, low-functionality wrapper around PHP's cURL library
This Requests library should be good enough for most requests, as long as you aren't doing anything special or crazy. If you outgrow it, then you should either (1) use Guzzle, or (2) write your own requests library that has better coverage.
Granted, this should handle MOST use cases. I don't know if it handles file uploads. Theoretically, it does, but I wouldn't bank on it, and, if it doesn't, I will not expand the functionality to cover file uploads.
With this, you can easily make GET or POST requests. Set extra headers. Easily set a user-agent. Set parameters. And cache the data for later retrieval.
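A minimal usage sketch, assuming the class is named Request (the documentation below never names the class, so that is a guess), using only the constructor and execute() documented below:

```php
<?php
// Hypothetical usage of this wrapper. The class name "Request" is an
// assumption; the constructor and execute() signatures come from the
// documentation below.
$request = new Request('https://api.github.com/repos', array(
    'cache'     => true, // cache the response for later retrieval
    'cache_ttl' => 600,  // per the constructor's default options
));

// execute() returns the response body as a string by default.
$data = $request->execute();
```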
__construct(string $url, array $options = array('cache' => true, 'cache_ttl' => 600, 'cache_bin' => true))
Creates a request object
Currently, all of the options apply to caching. The three that are understood are:
cache is a boolean that turns caching on or off. It is recommended that you leave it on.
cache_life is how long, in seconds, the cache will live. In other words, no attempt to get new data will be made until the saved data is older than the cache life. It defaults to 3600 (one hour).
cache_bin is the sub-directory of the workflow's cache folder where the results are saved. If cache_bin is set to false while caching is turned on, then all the results will be saved directly into the workflow's cache directory.
Cache files are saved as md5 hashes of the request object. So, if you change anything about the request, then it will be considered a new cache file. Data is saved to the cache only if we receive an HTTP response code less than 400.
My advice is not to touch these options and let the cache work with its default behavior.
A further note on cache_bin: if the cache_bin option is true, then the cache bin will be a directory inside the cache directory named after the hostname. So if the URL is http://api.github.com/api...., then the cache_bin will be api.github.com, and all cached data will be saved in that directory. If you pass a string instead, then that string becomes the directory the data is saved under.
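To illustrate (the class name Request is an assumption; the option names follow the constructor above):

```php
<?php
// With 'cache_bin' => true, results for this URL would be cached in a
// directory named after the hostname, i.e. "api.github.com".
$request = new Request('http://api.github.com/user', array(
    'cache'     => true,
    'cache_bin' => true,
));

// Passing a string instead names the cache directory explicitly.
$request = new Request('http://api.github.com/user', array(
    'cache'     => true,
    'cache_bin' => 'github-results',
));
```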
the URL to request
execute(boolean $code = false) : string|array
Executes the cURL request
If you set $code to true, then this function will return an associative array as:
[ 'code' => HTTP_RESPONSE_CODE, 'data' => RESPONSE_DATA ];
If you get cached data, then the code will be "faked" as a 302, which is appropriate.
If there is an error, then the code will be 0. So, if you manage to get expired cache data, then the code will be 0 and there will still be data. If there is no expired cache data, then you will receive an array of
[ 0, false ].
This method does not cache data unless the response code is less than 400. If you need better data integrity than that, use Guzzle or write your own request library. Or improve this one by opening a pull request on the GitHub repo.
whether or not to return an HTTP response code
the response data, or an array with the code
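A sketch of handling the two return shapes (again, the class name Request is an assumption):

```php
<?php
$request = new Request('https://api.github.com/user');

// Pass true to get the HTTP code alongside the data.
$result = $request->execute(true);

if (0 === $result['code']) {
    // Error (or expired-cache fallback); $result['data'] may still
    // hold stale data, or be false if there was no cache at all.
} elseif ($result['code'] < 400) {
    $body = $result['data'];
}
```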
set_auth(string $username, string $password)
Sets basic authorization for a cURL request
If you need more advanced authorization methods, and if you cannot make them happen with headers, then use a different library. I recommend Guzzle.
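For example (hypothetical credentials; class name assumed as before):

```php
<?php
$request = new Request('https://api.example.com/private');

// HTTP Basic authorization for the request.
$request->set_auth('my-username', 'my-password');

$data = $request->execute();
```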
clear_cache(string|boolean $bin = false) : null
Clears a cache bin
Call the method with no arguments if you aren't using a cache bin; note, however, that this will choke on sub-directories.
the name of the cache bin (or a URL if you're setting them automatically)
when encountering a sub-directory
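A sketch of both call styles (whether the method is also callable statically isn't documented, so an instance is assumed):

```php
<?php
$request = new Request('http://api.github.com/user');

// Clear a specific cache bin by name...
$request->clear_cache('api.github.com');

// ...or, if you aren't using cache bins, call it with no arguments.
// Note: this will choke if the cache directory contains sub-directories.
$request->clear_cache();
```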
get_cached_data($ignore_life = false) : string|boolean
Gets cached data
This method first checks whether the cache file exists. If $ignore_life is true, then it returns the data without checking the cache's age. Otherwise, we check to make sure that $cache_life is set, and then we check the age of the cache. If any of these checks fail, then we return false, which indicates that new data should be fetched. Otherwise, we retrieve the cache.
the data saved in the cache or false
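A sketch of checking the cache before making a fresh request (class name assumed, as above):

```php
<?php
$request = new Request('https://api.github.com/user');

// Returns the cached data only if it exists and is still fresh...
$data = $request->get_cached_data();

// ...or pass true to return it regardless of its age.
$stale = $request->get_cached_data(true);

if (false === $data) {
    // No usable cache: fetch new data instead.
    $data = $request->execute();
}
```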