Science Fair Project Encyclopedia
Development of HTTP was co-ordinated by the World Wide Web Consortium and working groups of the Internet Engineering Task Force, culminating in the publication of a series of RFCs, most notably RFC 2616, which defines HTTP/1.1, the version of HTTP in common use today.
HTTP is a request/response protocol between clients and servers. An HTTP client, such as a web browser, typically initiates a request by establishing a TCP/IP connection to a particular port on a remote host (port 80 by default). An HTTP server listening on that port waits for the client to send a request string, such as "GET / HTTP/1.1" (which would request the default page of that web server), followed by an email-like MIME message which has a number of informational header strings that describe aspects of the request, followed by an optional body of arbitrary data. Some headers are optional, while others (such as Host) are required by the HTTP/1.1 protocol. Upon receiving the request string (and message, if any), the server sends back a response string, such as "200 OK", and a message of its own, the body of which is perhaps the requested file, an error message, or some other information.
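The request/response exchange described above can be sketched in code without touching the network. The following Python snippet is an illustrative sketch, not part of any standard library API: `build_request` and `parse_status_line` are hypothetical helper names, and `www.example.com` is a placeholder host.

```python
# Construct the raw bytes of a minimal HTTP/1.1 request: a request line,
# headers, a blank line, then an optional body (as described above).

def build_request(method: str, path: str, host: str, body: bytes = b"") -> bytes:
    """Assemble a request line, header lines, a blank line, and the body."""
    headers = [
        f"{method} {path} HTTP/1.1",
        f"Host: {host}",                    # Host is mandatory in HTTP/1.1
        f"Content-Length: {len(body)}",
    ]
    return ("\r\n".join(headers) + "\r\n\r\n").encode("ascii") + body

def parse_status_line(response: bytes) -> tuple:
    """Split a status line such as 'HTTP/1.1 200 OK' into (version, code, reason)."""
    line = response.split(b"\r\n", 1)[0].decode("ascii")
    version, code, reason = line.split(" ", 2)
    return version, int(code), reason

request = build_request("GET", "/", "www.example.com")
print(request.decode("ascii"))
print(parse_status_line(b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n"))
```

Sending those bytes over a TCP socket to port 80 of the named host would produce exactly the kind of conversation shown later in this article.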
HTTP request methods
- GET: By far the most common method, used to request a resource by specifying its URL.
- POST: Similar to GET, except that a message body, typically containing key-value pairs from an HTML form submission, is included in the request.
- PUT: Used for uploading files to a specified URI on a web server.
- DELETE: Rarely implemented; removes the resource at the specified URI.
- HEAD: Identical to GET, except that only the headers are returned, not the page content. Useful for retrieving meta-information.
- TRACE: Echoes back the received request, so that a client can see what intermediate servers are adding to or changing in the request.
- CONNECT: Rarely implemented; used with a proxy that can switch to being an SSL tunnel.
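As a small illustration of the methods above, Python's standard library lets a client choose the method when constructing a request. This is a sketch only; the URLs are placeholders and nothing is actually sent.

```python
from urllib.request import Request

# Constructing requests with different HTTP methods (no network traffic here;
# the example.com URLs are illustrative placeholders).
get_req  = Request("http://www.example.com/")                 # GET by default
head_req = Request("http://www.example.com/", method="HEAD")  # headers only
post_req = Request("http://www.example.com/form",
                   data=b"name=value",                        # a body makes it a POST
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

print(get_req.get_method(), head_req.get_method(), post_req.get_method())
# GET HEAD POST
```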
HTTP differs from other TCP-based protocols such as FTP in that connections are usually closed once a particular request (or a related series of requests) has been completed. This design makes HTTP ideal for the World Wide Web, where pages regularly link to pages on other servers. It can occasionally pose problems for Web designers, because the lack of a persistent connection necessitates alternative methods of maintaining users' "state". Many of these methods involve the use of "cookies".
HTTPS is the secure version of HTTP, using SSL/TLS to protect the traffic. The protocol normally uses TCP port 443. SSL, originally created to protect HTTP, is especially suited to it, since SSL can provide (some) protection even if only one side of the communication, the server, is authenticated. This is typically the case in HTTP transactions over the Internet.
The locations of HTTP (and HTTPS) pages are given as Uniform Resource Locators or URLs. This address location syntax was created for linking Web pages.
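The URL syntax decomposes into a scheme, host, optional port, path, query, and fragment. The following sketch uses Python's standard `urllib.parse` module on a made-up example URL:

```python
from urllib.parse import urlparse

# Decompose an illustrative URL into its components.
url = urlparse("https://www.example.com:443/path/page.html?q=http#section")
print(url.scheme)    # https
print(url.hostname)  # www.example.com
print(url.port)      # 443
print(url.path)      # /path/page.html
print(url.query)     # q=http
print(url.fragment)  # section
```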
Below is a sample conversation between an HTTP client and an HTTP server running on www.google.com, port 80.
GET / HTTP/1.1
Host: www.google.com

(the request is terminated by a blank line)

HTTP/1.1 200 OK
Content-Length: 3059
Server: GWS/2.0
Date: Sat, 11 Jan 2003 02:44:04 GMT
Content-Type: text/html
Cache-control: private
Set-Cookie: PREF=ID=73d4aef52e57bae9:TM=1042253044:LM=1042253044:S=SMCc_HRPCQiqyX9j; expires=Sun, 17-Jan-2038 19:14:07 GMT; path=/; domain=.google.com
Connection: keep-alive
(followed by a blank line and HTML text comprising the Google home page.)
In HTTP/1.0, the client sends a request to the server, the server sends a response back to the client, and the connection is then closed. HTTP/1.1, on the other hand, supports persistent connections: after sending a request and receiving the response, the client can immediately send additional requests and receive additional responses over the same TCP connection. Because the TCP connection is not re-established for each request, the relative overhead due to TCP is much lower per request. It is also possible to send further requests before the responses to earlier ones have arrived; this technique is known as "pipelining".
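A persistent HTTP/1.1 connection can be demonstrated with the standard library alone. The sketch below starts a throwaway local server on an ephemeral port (the `Handler` class and the `hello` body are illustrative) and then issues two GET requests over the same client connection:

```python
import http.client
import http.server
import threading

# A minimal local server; HTTP/1.1 plus Content-Length enables keep-alive.
class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests reuse one TCP connection (http.client honours keep-alive).
conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
results = []
for _ in range(2):
    conn.request("GET", "/")
    resp = conn.getresponse()
    results.append((resp.status, resp.read()))
print(results)
conn.close()
server.shutdown()
```

Pipelining proper (sending the second request before the first response arrives) is not supported by `http.client`; this sketch shows only connection reuse.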
See also
- List of HTTP status codes
- 404 error
- Uniform resource locator
- Basic authentication scheme
- Digest access authentication
- Captive portal
- HTTP proxy
External links
- Tim Berners-Lee's original 1992 Internet-Draft http://www.w3.org/Protocols/HTTP/HTTP2.html
- RFC 2616 - The current HTTP/1.1 specification
- HTTP/1.1 specification errata
- HTTP Made Really Easy
- HTTP header viewer
- List of HTTP status codes
- HTTP Sequence Diagram (PDF)
- Command-line HTTP clients: cURL, Wget
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.