How to Read Big Files with PHP

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not.

There are two metrics we care about. The first is CPU usage: how fast or slow is the process we want to work on? The second is memory usage: how much memory does the script take to execute? These are often inversely proportional, meaning we can reduce memory usage at the cost of CPU usage, and vice versa.

In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server.

It’s impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top on Ubuntu or macOS. On Windows, consider using the Windows Subsystem for Linux, so you can use top in Ubuntu.

For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too. In the end, I want you to be able to make an educated choice.

The methods we’ll use to see how much memory is used are:

memory_get_peak_usage();

// formatBytes is taken from the php.net documentation

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data.

Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take).

For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form.

In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

// from memory.php

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

print formatBytes(memory_get_peak_usage());
// from reading-files-line-by-line-1.php

function readTheFile($path) {
    $lines = [];
    $handle = fopen($path, "r");

    while(!feof($handle)) {
        $lines[] = trim(fgets($handle));
    }

    fclose($handle);
    return $lines;
}

readTheFile("shakespeare.txt");

require "memory.php";

We’re reading a text file containing the complete works of Shakespeare. The text file is about 5.5MB, and the peak memory usage is 12.8MB. Now, let’s use a generator to read each line:

// from reading-files-line-by-line-2.php

function readTheFile($path) {
    $handle = fopen($path, "r");

    while(!feof($handle)) {
        yield trim(fgets($handle));
    }

    fclose($handle);
}

readTheFile("shakespeare.txt");

require "memory.php";

The text file is the same size, but the peak memory usage is 393KB. This doesn’t mean anything until we do something with the data we’re reading. Perhaps we can split the document into chunks whenever we see two blank lines. Something like this:

// from reading-files-line-by-line-3.php

$iterator = readTheFile("shakespeare.txt");

$buffer = "";

foreach ($iterator as $iteration) {
    preg_match("/\n{3}/", $buffer, $matches);

    if (count($matches)) {
        print ".";
        $buffer = "";
    } else {
        $buffer .= $iteration . PHP_EOL;
    }
}

require "memory.php";

Any guesses how much memory we’re using now? Would it surprise you to know that, even though we split the text document up into 1,216 chunks, we still only use 459KB of memory? Given the nature of generators, the most memory we’ll use is that which we need to store the largest text chunk in an iteration. In this case, the largest chunk is 101,985 characters.

I’ve already written about the performance boosts of using generators and Nikita Popov’s Iterator library, so go check that out if you’d like to see more!

Generators have other uses, but this one is demonstrably good for performant reading of large files. If we need to work on the data, generators are probably the best way.
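
To make the first scenario from earlier concrete, here’s a minimal sketch (my illustration, using the generator version of readTheFile above; dispatchJob is a hypothetical stand-in for whatever queue manager you use) that batches the lines into 10,000-line jobs:

$batch = [];

foreach (readTheFile("shakespeare.txt") as $line) {
    $batch[] = $line;

    // hand off a full batch and start collecting the next one
    if (count($batch) === 10000) {
        dispatchJob($batch);
        $batch = [];
    }
}

if (count($batch)) {
    dispatchJob($batch); // the final, partial batch
}

At no point do we hold more than 10,000 lines in memory, which is exactly the bound the scenario asked for.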

Piping Between Files

In situations where we don’t need to operate on the data, we can pass file data from one file to another. This is commonly called piping (presumably because we don’t see what’s inside a pipe except at each end … as long as it’s opaque, of course!). We can achieve this by using stream methods. Let’s first write a script to transfer from one file to another, so that we can measure the memory usage:

// from piping-files-1.php

file_put_contents(
    "piping-files-1.txt", file_get_contents("shakespeare.txt")
);

require "memory.php";

Unsurprisingly, this script uses slightly more memory to run than the text file it copies. That’s because it has to read (and keep) the entire file contents in memory until it has written to the new file. For small files, that may be okay. When we start to use bigger files, not so much…

Let’s try streaming (or piping) from one file to another:

// from piping-files-2.php

$handle1 = fopen("shakespeare.txt", "r");
$handle2 = fopen("piping-files-2.txt", "w");

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

This code is slightly strange. We open handles to both files, the first in read mode and the second in write mode. Then we copy from the first into the second. We finish by closing both files again. It may surprise you to know that the memory used is 393KB.

That seems familiar. Isn’t that roughly the figure the generator code produced while reading each line? That’s no coincidence: in both cases, only a small amount of data is held in memory at any moment. fgets reads up to one line at a time (an optional second argument caps the number of bytes to read), and stream_copy_to_stream works through the data in small chunks in much the same way.

stream_copy_to_stream’s third argument caps the total number of bytes to copy, and defaults to -1, meaning “copy everything”. It reads from one stream, a chunk at a time, and writes to the other stream. It skips the part where the generator yields a value, since we don’t need to work with that value.
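
For reference, stream_copy_to_stream also accepts an offset after that maximum length. A quick sketch (the output filename is made up) that copies only the first kilobyte of the file:

$handle1 = fopen("shakespeare.txt", "r");
$handle2 = fopen("first-kilobyte.txt", "w");

// copy at most 1024 bytes, starting at offset 0
stream_copy_to_stream($handle1, $handle2, 1024, 0);

fclose($handle1);
fclose($handle2);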

Piping this text isn’t useful to us, so let’s think of other examples which might be. Suppose we wanted to output an image from our CDN, as a sort of redirected application route. We could illustrate it with code resembling the following:

// from piping-files-3.php

file_put_contents(
    "piping-files-3.jpeg", file_get_contents(
        "https://github.com/assertchris/uploads/raw/master/rick.jpg"
    )
);

// ...or write this straight to stdout, if we don't need the memory info

require "memory.php";

Imagine an application route brought us to this code. But instead of serving up a file from the local file system, we want to get it from a CDN. We could swap file_get_contents out for something more elegant (like Guzzle), but under the hood it’s much the same.

The memory usage (for this image) is around 581KB. Now, how about we try to stream this instead?

// from piping-files-4.php

$handle1 = fopen(
    "https://github.com/assertchris/uploads/raw/master/rick.jpg", "r"
);

$handle2 = fopen(
    "piping-files-4.jpeg", "w"
);

// ...or write this straight to stdout, if we don't need the memory info

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

The memory usage is slightly less (at 400KB), but the result is the same. If we didn’t need the memory information, we could just as well print to standard output. In fact, PHP provides a simple way to do this:

$handle1 = fopen(
    "https://github.com/assertchris/uploads/raw/master/rick.jpg", "r"
);

$handle2 = fopen(
    "php://stdout", "w"
);

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

// require "memory.php";

Other Streams

There are a few other streams we could pipe and/or write to and/or read from:

  • php://stdin (read-only)
  • php://stderr (write-only, like php://stdout)
  • php://input (read-only) which gives us access to the raw request body
  • php://output (write-only) which lets us write to an output buffer
  • php://memory and php://temp (read-write) are places we can store data temporarily. The difference is that php://temp will store the data in the file system once it becomes large enough, while php://memory will keep storing in memory until that runs out (there’s a short sketch of php://temp just after this list).
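
Here’s that promised sketch of php://temp. It behaves like an in-memory stream until the data outgrows a threshold (2MB by default, tunable via php://temp/maxmemory:<bytes>), at which point PHP quietly moves it to a temporary file:

$handle = fopen("php://temp", "r+");

fwrite($handle, "hello world");
rewind($handle);

print stream_get_contents($handle); // hello world

fclose($handle);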

Filters

There’s another trick we can use with streams called filters. They’re a kind of in-between step, providing a tiny bit of control over the stream data without exposing it to us. Imagine we wanted to compress our shakespeare.txt. We might use the Zip extension:

// from filters-1.php

$zip = new ZipArchive();
$filename = "filters-1.zip";

$zip->open($filename, ZipArchive::CREATE);
$zip->addFromString("shakespeare.txt", file_get_contents("shakespeare.txt"));
$zip->close();

require "memory.php";

This is a neat bit of code, but it clocks in at around 10.75MB. We can do better, with filters:

// from filters-2.php

$handle1 = fopen(
    "php://filter/zlib.deflate/resource=shakespeare.txt", "r"
);

$handle2 = fopen(
    "filters-2.deflated", "w"
);

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);

require "memory.php";

Here, we can see the php://filter/zlib.deflate filter, which reads and compresses the contents of a resource. We can then pipe this compressed data into another file. This only uses 896KB.

I know this isn’t the same format, and there are upsides to making a zip archive. You have to wonder though: if you could choose a different format and use a twelfth of the memory, wouldn’t you?
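
The same compression can also be done by attaching the filter to a handle explicitly, rather than encoding it into the path. A sketch (the compression level parameter is optional):

$handle1 = fopen("shakespeare.txt", "r");
$handle2 = fopen("filters-2.deflated", "w");

// compress the data as it is read out of the first handle
stream_filter_append($handle1, "zlib.deflate", STREAM_FILTER_READ, ["level" => 6]);

stream_copy_to_stream($handle1, $handle2);

fclose($handle1);
fclose($handle2);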

To uncompress the data, we can run the deflated file back through another zlib filter:

// from filters-2.php

file_get_contents(
    "php://filter/zlib.inflate/resource=filters-2.deflated"
);

Streams have been extensively covered in “Understanding Streams in PHP” and “Using PHP Streams Effectively”. If you’d like a different perspective, check those out!

Customizing Streams

fopen and file_get_contents have their own set of default options, but these are completely customizable. To define them, we need to create a new stream context:

// from creating-contexts-1.php

$data = join("&", [
    "twitter=assertchris",
]);

$headers = join("\r\n", [
    "Content-type: application/x-www-form-urlencoded",
    "Content-length: " . strlen($data),
]);

$options = [
    "http" => [
        "method" => "POST",
        "header"=> $headers,
        "content" => $data,
    ],
];

$context = stream_context_create($options);

$handle = fopen("https://example.com/register", "r", false, $context);
$response = stream_get_contents($handle);

fclose($handle);

In this example, we’re trying to make a POST request to an API. The API endpoint is secure, but we still use the http context property (it’s used for both http and https). We set a few headers and open a file handle to the API. We can open the handle as read-only, since the context takes care of the writing.

There are loads of things we can customize, so it’s best to check out the documentation if you want to know more.
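
One related trick: when fopen uses the http wrapper, the response headers end up in the stream’s metadata. Had we wanted the status line in the example above, we could have checked it before closing the handle, something like this:

// before fclose($handle): wrapper_data holds the raw response headers
$meta = stream_get_meta_data($handle);

print $meta["wrapper_data"][0] . PHP_EOL; // e.g. HTTP/1.1 200 OK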

Making Custom Protocols and Filters

Before we wrap things up, let’s talk about making custom protocols. If you look at the documentation, you can find an example class to implement:

Protocol {
    public resource $context;
    public __construct ( void )
    public __destruct ( void )
    public bool dir_closedir ( void )
    public bool dir_opendir ( string $path , int $options )
    public string dir_readdir ( void )
    public bool dir_rewinddir ( void )
    public bool mkdir ( string $path , int $mode , int $options )
    public bool rename ( string $path_from , string $path_to )
    public bool rmdir ( string $path , int $options )
    public resource stream_cast ( int $cast_as )
    public void stream_close ( void )
    public bool stream_eof ( void )
    public bool stream_flush ( void )
    public bool stream_lock ( int $operation )
    public bool stream_metadata ( string $path , int $option , mixed $value )
    public bool stream_open ( string $path , string $mode , int $options ,
        string &$opened_path )
    public string stream_read ( int $count )
    public bool stream_seek ( int $offset , int $whence = SEEK_SET )
    public bool stream_set_option ( int $option , int $arg1 , int $arg2 )
    public array stream_stat ( void )
    public int stream_tell ( void )
    public bool stream_truncate ( int $new_size )
    public int stream_write ( string $data )
    public bool unlink ( string $path )
    public array url_stat ( string $path , int $flags )
}

We’re not going to implement one of these, since I think it is deserving of its own tutorial. There’s a lot of work that needs to be done. But once that work is done, we can register our stream wrapper quite easily:

if (in_array("highlight-names", stream_get_wrappers())) {
    stream_wrapper_unregister("highlight-names");
}

stream_wrapper_register("highlight-names", "HighlightNamesProtocol");

$highlighted = file_get_contents("highlight-names://story.txt");

Similarly, it’s also possible to create custom stream filters. The documentation has an example filter class:

Filter {
    public $filtername;
    public $params;
    public int filter ( resource $in , resource $out , int &$consumed ,
        bool $closing )
    public void onClose ( void )
    public bool onCreate ( void )
}

This can be registered just as easily:

$handle = fopen("story.txt", "w+");
stream_filter_append($handle, "highlight-names", STREAM_FILTER_READ);

highlight-names needs to match the filtername property of the new filter class. It’s also possible to use custom filters in a php://filter/highlight-names/resource=story.txt string. It’s much easier to define filters than it is to define protocols. One reason for this is that protocols need to handle directory operations, whereas filters only need to handle each chunk of data.

If you have the gumption, I strongly encourage you to experiment with creating custom protocols and filters. If you can apply filters to stream_copy_to_stream operations, your applications are going to use next to no memory even when working with obscenely large files. Imagine writing a resize-image filter or an encrypt-for-application filter.
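
To make the filter side concrete, here’s a minimal sketch of what such a filter could look like. The class body is my own illustration (the real pieces are php_user_filter, stream_bucket_make_writeable, stream_bucket_append and stream_filter_register); “highlighting” here just uppercases a single hard-coded name:

class HighlightNamesFilter extends php_user_filter
{
    public function filter($in, $out, &$consumed, $closing): int
    {
        while ($bucket = stream_bucket_make_writeable($in)) {
            // "highlight" each occurrence as the chunk streams past
            $bucket->data = str_replace("Hamlet", "HAMLET", $bucket->data);
            $consumed += $bucket->datalen;
            stream_bucket_append($out, $bucket);
        }

        return PSFS_PASS_ON;
    }
}

stream_filter_register("highlight-names", "HighlightNamesFilter");

$handle = fopen("story.txt", "r");
stream_filter_append($handle, "highlight-names", STREAM_FILTER_READ);

print stream_get_contents($handle);
fclose($handle);

One caveat with this sketch: a name split across two buckets won’t be matched, so a real filter would need to carry a little state between calls.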

Summary

Though this isn’t a problem we frequently suffer from, it’s easy to mess up when working with large files. In asynchronous applications, it’s just as easy to bring the whole server down when we’re not careful about memory usage.

This tutorial has hopefully introduced you to a few new ideas (or refreshed your memory about them), so that you can think more about how to read and write large files efficiently. When we become familiar with streams and generators, and stop using functions like file_get_contents, an entire category of errors disappears from our applications. That seems like a good thing to aim for!

Original text: https://www.sitepoint.com/performant-reading-big-files-php/

Nginx Global Variables

Below is a list of nginx’s built-in variables (a small usage sketch follows the list):

  • $args: the arguments of the request; for www.123.com/1.php?a=1&b=2, $args is a=1&b=2
  • $content_length: the “Content-Length” header of the HTTP request
  • $content_type: the “Content-Type” header of the HTTP request
  • $document_root: the value of the root directive in the nginx virtual host configuration
  • $document_uri: the URI of the current request, without the arguments; for www.123.com/1.php?a=1&b=2, $document_uri is 1.php, without the trailing arguments
  • $host: the host header, i.e. the domain name
  • $http_user_agent: the client’s details, i.e. the browser identification; can be set with curl -A
  • $http_cookie: the client’s cookie data
  • $limit_rate: the transfer rate limit if the server is configured with limit_rate; 0 if it isn’t
  • $remote_addr: the client’s public IP address
  • $remote_port: the client’s port
  • $remote_user: the username the client authenticated as, if nginx authentication is configured
  • $request_body_file: the name of the local file holding the request body sent to the backend server when reverse proxying
  • $request_method: the method used to request the resource, e.g. GET/PUT/DELETE
  • $request_filename: the file path of the currently requested resource, equivalent to the combination $document_root/$document_uri
  • $request_uri: the requested link, including both $document_uri and $args
  • $scheme: the protocol of the request, e.g. ftp, http, https
  • $server_protocol: the version of the protocol the client requested with, e.g. HTTP/1.0, HTTP/1.1, HTTP/2.0
  • $server_addr: the server’s IP address
  • $server_name: the server’s hostname
  • $server_port: the server’s port number
  • $uri: the same as $document_uri
  • $http_referer: the referer of the request, i.e. the link the client followed to arrive here; can be set with curl -e
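
As a quick, hypothetical illustration (my addition, not part of the original notes), several of these variables are commonly combined into a custom access-log format:

# hypothetical: a log format built from the variables described above
log_format verbose '$remote_addr - $remote_user '
                   '"$request_method $request_uri $server_protocol" '
                   '"$http_referer" "$http_user_agent"';

access_log /var/log/nginx/access.log verbose;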

Nginx Regular Expression Matching Rules

Location matching operators:

= exact match

^~ the URI begins with the given string

~ regular expression match (case-sensitive)

~* regular expression match (case-insensitive)

!~ case-sensitive regex mismatch

!~* case-insensitive regex mismatch

/ matches any request

Matching priority:

= > ^~ > ~ and ~* > /

Regular expression symbols:

^ matches the start of the string

$ matches the end of the string

.* the . matches any character; the * matches zero or more occurrences

\. the backslash escapes, so \. matches a literal .

(value1|value2|value3|value4) alternation; for example, (jpg|gif|png|bmp) matches jpg, gif, png or bmp

i case-insensitive

File and directory tests:

-f and !-f: whether the given path exists and is a regular file

-d and !-d: whether the given path exists and is a directory

-e and !-e: whether the given path exists at all, file or directory

-x and !-x: whether the file at the given path exists and is executable
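
To tie these together, here’s a hypothetical server block (my illustration, not part of the original notes) showing the operators and a file test in context:

server {
    listen 80;

    # file test: short-circuit to maintenance mode if the flag file exists
    if (-f $document_root/maintenance.on) {
        return 503;
    }

    # exact match: checked first
    location = /status {
        return 200 "ok";
    }

    # prefix match that suppresses the regex checks for /static/...
    location ^~ /static/ {
        root /var/www;
    }

    # case-insensitive regex: cache common image formats
    location ~* \.(jpg|gif|png|bmp)$ {
        expires 7d;
    }

    # plain prefix match: everything else
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}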

What Is a Webshell?

A webshell is a script-based attack tool used in web intrusions.

Simply put, a webshell is an ASP or PHP trojan backdoor. After breaking into a website, an attacker often places these ASP or PHP backdoor files in the web directory of the site’s server, mixed in among the normal web files. The attacker can then control the web server through the browser via the ASP or PHP backdoor, including uploading and downloading files, browsing the database, and executing arbitrary commands.

What Is a “Trojan”?

“Trojan” is short for “Trojan Horse”, originally the story of ancient Greek soldiers who hid inside a wooden horse to enter and capture an enemy city. On the Internet, a “Trojan Horse” refers to a program that its authors embed in an application or game available for download, which can take control of the user’s computer system and may damage or even cripple it.

What Is a Backdoor?

As everyone knows, a computer has 65,535 ports. If you picture the computer as a house, those 65,535 ports are 65,535 doors it opens to connect with the outside world. Behind every door sits a service. Some doors were opened by the owner to welcome guests (provided services); some were opened by the owner to go out and visit others (access to remote services). In theory, all the remaining doors should be closed, but for all sorts of reasons many stand open. So intruders sneak in, privacy gets spied on, daily life gets disturbed, and the house may even be ransacked. That quietly opened door is the “backdoor”.

Advantages of a Webshell

A webshell’s biggest advantage is that it can pass through firewalls: because the data it exchanges with the controlled server or remote host travels over port 80, the firewall does not block it. Using a webshell also generally leaves no trace in the system logs, only some data-submission records in the site’s web logs, so an inexperienced administrator will find it very hard to spot the intrusion.

RESTful Conventions

1. What is REST?

REST stands for Representational State Transfer. It first appeared in 2000 in Roy Fielding’s doctoral dissertation; Fielding is one of the principal authors of the HTTP specification. In the dissertation he writes, in effect: “My goal in this work is to understand and evaluate the architectural design of network-based application software within the constraints of architectural principles, arriving at an architecture that is powerful, performs well, and is well suited to communication. REST refers to a set of architectural constraints and principles.” If an architecture satisfies REST’s constraints and principles, we call it a RESTful architecture.

REST itself doesn’t invent any new technology, component or service; the idea behind RESTful is to use the Web’s existing features and capabilities, making better use of certain guidelines and constraints that already exist in Web standards. Although REST itself is deeply shaped by Web technology, in theory the REST architectural style is not tied to HTTP; it’s simply that HTTP is currently the only instance related to REST. So the REST we describe here is REST implemented over HTTP.

2. Understanding RESTful

To understand RESTful architecture, you need to understand what the phrase Representational State Transfer actually means, and what each of its words implies.

Below, guided by the REST principles, we discuss resources from the angles of definition, retrieval, representation, linking and state transition, listing and explaining the key concepts:

  • Resources and URIs
  • The uniform resource interface
  • Representations of resources
  • Links between resources
  • State transfer

2.1 Resources and URIs

REST is short for Representational State Transfer, so what exactly is being represented? Resources. Anything that may need to be referenced is a resource. A resource can be an entity (a phone number, say) or just an abstract concept (such as value). Here are some examples of resources:

  • a user’s phone number
  • a user’s profile information
  • the GPRS plan ordered by the most users
  • the dependency between two products
  • the discount plans a user is eligible for
  • the potential value of a phone number

For a resource to be identifiable it needs a unique identifier, and on the Web that unique identifier is the URI (Uniform Resource Identifier).

A URI can be seen both as the resource’s address and as its name. If a piece of information isn’t addressed by a URI, it isn’t a resource; it’s only some information about a resource. URI design should follow the principle of addressability and be self-describing, intuitively suggesting what it points at. Taking the GitHub site as an example, here are some reasonably good URIs:

  • https://github.com/git
  • https://github.com/git/git
  • https://github.com/git/git/blob/master/block-sha1/sha1.h
  • https://github.com/git/git/commit/e3af72cdafab5993d18fae056f87e1d675913d08
  • https://github.com/git/git/pulls
  • https://github.com/git/git/pulls?state=closed
  • https://github.com/git/git/compare/master...next

Now let’s look at a few URI design tips:

  • Use _ or - to make URIs more readable

URIs on the Web used to be cold numbers or meaningless strings, but more and more sites now use _ or - to separate words, making URIs feel more human. The well-known Chinese open-source community OSChina, for example, uses this style for its news URLs, such as http://www.oschina.net/news/38119/oschina-translate-reward-plan.

  • Use / to express resource hierarchy

For example, /git/git/commit/e3af72cdafab5993d18fae056f87e1d675913d08 above expresses a multi-level resource: a particular commit of the git user’s git project. Likewise, /orders/2012/10 could represent the order records for October 2012.

  • Use ? to filter resources

Many people treat ? as nothing more than a way to pass parameters, which easily makes URIs overly complex and hard to understand. ? is better used for filtering resources: /git/git/pulls represents all the pull requests of the git project, while /pulls?state=closed represents the ones that have been closed. URLs like these usually correspond to query results under specific conditions, or the results of some computation.

  • , or ; can express relationships between peer resources

When we need to express a relationship between resources at the same level, we can separate them with , or ;. For example, if GitHub someday lets us compare a file between any two commits, it might use a URI like /git/git/block-sha1/sha1.h/compare/e3af72cdafab5993d18fae056f87e1d675913d08;bd63e61bdf38e872d5215c07b264dcc16e4febca. As it stands, GitHub uses ... for this instead, e.g. /git/git/compare/master...next.

2.2 The Uniform Resource Interface

RESTful architecture should follow the uniform interface principle. The uniform interface consists of a limited, predefined set of operations, and every resource, whatever it is, is accessed through this same interface. The interface should use standard HTTP methods such as GET, PUT and POST, and follow the semantics of those methods.

If resources are exposed according to the semantics of the HTTP methods, the interface gains the properties of safety and idempotence. GET and HEAD requests are safe: no matter how many times they are issued, the server’s state does not change. GET, HEAD, PUT and DELETE requests are idempotent: no matter how many times the operation is applied to the resource, the result is the same, and later requests have no greater effect than the first.

Here are the typical usages of GET, DELETE, PUT and POST:

GET

  • Safe and idempotent
  • Retrieve a representation
  • Retrieve a representation if it has changed (caching)
  • 200 (OK): the representation is sent in the response
  • 204 (No Content): the resource has an empty representation
  • 301 (Moved Permanently): the resource’s URI has been updated
  • 303 (See Other): another URI (e.g. load balancing)
  • 304 (Not Modified): the resource is unchanged (caching)
  • 400 (Bad Request): malformed request (e.g. bad parameters)
  • 404 (Not Found): the resource doesn’t exist
  • 406 (Not Acceptable): the server doesn’t support the requested representation
  • 500 (Internal Server Error): generic error response
  • 503 (Service Unavailable): the server is currently unable to handle the request

POST

  • Neither safe nor idempotent
  • Create a resource with a server-managed (auto-generated) instance ID
  • Create a sub-resource
  • Partially update a resource
  • Update a resource only if it has not been modified (optimistic locking)
  • 200 (OK): an existing resource was changed
  • 201 (Created): a new resource was created
  • 202 (Accepted): the request was accepted but processing isn’t finished (asynchronous handling)
  • 301 (Moved Permanently): the resource’s URI has been updated
  • 303 (See Other): another URI (e.g. load balancing)
  • 400 (Bad Request): malformed request
  • 404 (Not Found): the resource doesn’t exist
  • 406 (Not Acceptable): the server doesn’t support the requested representation
  • 409 (Conflict): generic conflict
  • 412 (Precondition Failed): a precondition failed (e.g. a conflict during a conditional update)
  • 415 (Unsupported Media Type): the received representation isn’t supported
  • 500 (Internal Server Error): generic error response
  • 503 (Service Unavailable): the server is currently unable to handle the request

PUT

  • Not safe, but idempotent
  • Create a resource with a client-managed instance ID
  • Update a resource by replacing it
  • Update a resource only if it has not been modified (optimistic locking)
  • 200 (OK): an existing resource was changed
  • 201 (Created): a new resource was created
  • 301 (Moved Permanently): the resource’s URI has changed
  • 303 (See Other): another URI (e.g. load balancing)
  • 400 (Bad Request): malformed request
  • 404 (Not Found): the resource doesn’t exist
  • 406 (Not Acceptable): the server doesn’t support the requested representation
  • 409 (Conflict): generic conflict
  • 412 (Precondition Failed): a precondition failed (e.g. a conflict during a conditional update)
  • 415 (Unsupported Media Type): the received representation isn’t supported
  • 500 (Internal Server Error): generic error response
  • 503 (Service Unavailable): the server is currently unable to handle the request

DELETE

  • Not safe, but idempotent
  • Delete a resource
  • 200 (OK): the resource was deleted
  • 301 (Moved Permanently): the resource’s URI has changed
  • 303 (See Other): another URI (e.g. load balancing)
  • 400 (Bad Request): malformed request
  • 404 (Not Found): the resource doesn’t exist
  • 409 (Conflict): generic conflict
  • 500 (Internal Server Error): generic error response
  • 503 (Service Unavailable): the server is currently unable to handle the request

Now let’s look at some questions that often come up in practice:

  • What’s the difference between POST and PUT when creating resources?

The difference between POST and PUT for creation is whether the name (URI) of the created resource is decided by the client. For example, to add a “java” category for my blog posts, where the generated path is the category name /categories/java, the PUT method is appropriate (a sketch of both calls follows below). Many people, however, map POST, GET, PUT and DELETE directly onto CRUD; a typical Rails RESTful application, for example, does exactly that.

I believe this happens because Rails uses server-generated IDs as URIs by default, and many people learned REST through Rails, so the misconception arises easily.
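
To make this concrete with the stream contexts from the first half of these notes, here’s a sketch of the two styles against a hypothetical API (endpoints and payloads invented for illustration):

// PUT: the client decides the name, so "java" appears in the URI we call
$context = stream_context_create([
    "http" => ["method" => "PUT"],
]);

file_get_contents("http://api.example.com/categories/java", false, $context);

// POST: we only know the collection; the server assigns the new URI and
// reports it in the Location response header
$context = stream_context_create([
    "http" => [
        "method" => "POST",
        "header" => "Content-Type: application/x-www-form-urlencoded",
        "content" => "name=java",
    ],
]);

file_get_contents("http://api.example.com/categories", false, $context);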

  • Don’t some clients lack support for these HTTP methods?

This does happen, particularly with older browser-based clients that support only GET and POST.

In practice, both client and server may need to compromise. The Rails framework, for example, supports passing the real request method through a hidden _method=DELETE parameter, while client-side MVC frameworks like Backbone can pass _method and set the X-HTTP-Method-Override header to work around the problem.

  • Does a uniform interface mean we can’t add methods with special semantics?

The uniform interface doesn’t stop you from adding methods, as long as each method has concrete, recognizable semantics for operating on resources and the interface stays uniform overall.

WebDAV, for example, extends the HTTP methods with LOCK, UNLOCK and others. And GitHub’s API supports the PATCH method for updating issues (a sketch of such a call follows), for example:

PATCH /repos/:owner/:repo/issues/:number

Note, though, that with a method like PATCH, which isn’t one of the original standard HTTP methods, the server needs to consider whether clients can actually support it.
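
Here’s that promised sketch of a PATCH call, again using a PHP stream context (the owner, repo, issue number and token are placeholders):

$context = stream_context_create([
    "http" => [
        "method" => "PATCH",
        "header" => implode("\r\n", [
            "Authorization: token YOUR-TOKEN-HERE",
            "Content-Type: application/json",
            "User-Agent: example-client",
        ]),
        "content" => json_encode(["state" => "closed"]),
    ],
]);

file_get_contents(
    "https://api.github.com/repos/OWNER/REPO/issues/NUMBER",
    false,
    $context
);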

  • What guidance does the uniform interface give for URIs?

The uniform resource interface requires that resources be operated on via the standard HTTP methods, so a URI should only name the resource; it should not encode the operation on the resource.

Put plainly, URIs shouldn’t describe actions. The following URIs, for example, don’t meet the uniform interface requirement:

  • GET /getUser/1
  • POST /createUser
  • PUT /updateUser/1
  • DELETE /deleteUser/1

  • If a GET request increments a counter, does that violate safety?

Safety doesn’t mean a request produces no side effects at all. Many API platforms, for example, rate-limit requests: GitHub limits unauthenticated requests to 60 per hour.

But the client doesn’t issue these GET or HEAD requests in pursuit of the side effects; the side effects are the server “acting on its own initiative”.

Also, the server shouldn’t design side effects that are too significant, because the client assumes these requests produce none.

  • Is it acceptable to simply ignore caching?

Even if you use each verb exactly as intended, you can still easily shut off the caching machinery. The simplest way is to add a header like this to your HTTP responses: Cache-Control: no-cache. But in doing so you also lose efficient caching and revalidation support (via ETag and similar mechanisms).

On the client side, when implementing a client for a REST-style service, you should likewise take full advantage of the existing caching mechanisms to avoid re-fetching a representation on every request.

  • Is handling response codes really necessary?

HTTP’s response codes cover a range of situations, and using them correctly lets clients and servers communicate at a semantically richer level.

For example, a 201 (“Created”) response code signals that a new resource has been created, with its URI in the Location response header.

If you don’t take advantage of the rich application semantics of HTTP status codes, you miss an opportunity for better reusability, stronger interoperability and looser coupling.

If these so-called RESTful applications must convey error information through the response entity alone, then SOAP could do the job just as well; it satisfies that already.

2.3 Representations of Resources

We said above that a client fetches a resource through the HTTP methods, right? Not quite. Strictly speaking, what the client fetches is only a representation of the resource. A resource can be presented to the outside world in many representational forms, and what travels between client and server is a representation of the resource, not the resource itself. A text resource, for example, can be served as HTML, XML or JSON, and an image can be presented as PNG or JPG.

A representation consists of the data plus metadata describing the data; the “Content-Type” HTTP header, for example, is one such piece of metadata.

So how does the client know which representations the server offers?

The answer is HTTP content negotiation: the client can request a representation in a particular format through the Accept header, and the server tells the client the representation’s format through Content-Type.

Putting version numbers in the URI

Some APIs carry a version number in the URI, for example:

  • http://api.example.com/1.0/foo
  • http://api.example.com/1.2/foo
  • http://api.example.com/2.0/foo

If we understand version numbers as different representations of the same resource, then we should use a single URL and distinguish versions through the Accept header. Using GitHub as the example again, the full format of its Accept header is application/vnd.github[.version].param[+json].

For v3, that’s Accept: application/vnd.github.v3. For the example above, we could accordingly use the following headers (a small PHP sketch follows the list):

  • Accept: vnd.example-com.foo+json; version=1.0
  • Accept: vnd.example-com.foo+json; version=1.2
  • Accept: vnd.example-com.foo+json; version=2.0
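
And the promised PHP sketch: asking for version 1.0 of the hypothetical resource above through the Accept header:

$context = stream_context_create([
    "http" => [
        "header" => "Accept: vnd.example-com.foo+json; version=1.0",
    ],
]);

$response = file_get_contents("http://api.example.com/foo", false, $context);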

Using a URI suffix to distinguish representation formats

Frameworks like Rails support /users.xml or /users.json to distinguish formats. For clients this is undoubtedly more intuitive, but it mixes up the resource’s name with its representation format. My own view is that content negotiation should be preferred for distinguishing representations.

How to handle unsupported representation formats

What should happen when the server doesn’t support the requested representation format? It should return an HTTP 406 response, indicating that it refuses to handle the request.

2.4 Links Between Resources

We know that REST operates on resources through the standard HTTP methods, but understanding it as nothing more than a CRUD-style Web database architecture is far too simplistic.

That anti-pattern overlooks a core concept: “hypermedia as the engine of application state”. So what is hypermedia?

When you browse the Web, jumping from one link to a page and from another link to another page, you are using the concept of hypermedia: resources linked to one another.

To achieve this, the representation format has to include links that guide the client. In the book RESTful Web Services, the authors call this linked quality “connectedness”.

2.5 State Transfer

With the groundwork above, the state transfer in REST becomes easy to understand.

First, though, let’s discuss REST’s principle of stateless communication. At first glance it looks self-contradictory: if communication is stateless, what state is there to transfer?

In fact, the stateless communication principle doesn’t say the client application can’t have state; it says the server shouldn’t store client state.

2.5.1 Application State vs. Resource State

In practice, state should be split into application state and resource state: the client is responsible for maintaining application state, and the server maintains resource state.

Interaction between client and server must be stateless, with every request carrying all the information needed to handle it.

The server doesn’t need to retain application state between requests; it only attends to application state at the moment it receives an actual request.

This stateless communication principle allows servers and intermediaries to understand each request and response in isolation.

Across multiple requests, the same client no longer has to depend on the same server, which makes highly scalable and highly available server-side designs much easier.

Sometimes, though, we make designs that violate the stateless communication principle, for example using cookies to track server-side session state, like J2EE’s JSESSIONID.

That means the cookies the browser sends along with each request are being used to build up session state.

Of course, if a cookie stores information the server can verify without any session state (an authentication token, say), such a cookie is still compatible with REST principles.

2.5.2 Transfer of Application State

By this point, state transfer is easy to understand: “session” state isn’t kept on the server as resource state; the client tracks it as application state. The client’s application state changes under the guidance of the hypermedia the server provides: through hypermedia, the server tells the client which follow-up states the current state can move into.

Links like “next page” play exactly this state-advancing role, guiding you from the current state into the next possible one.