REST vs RPC vs HTTP vs TCP vs UDP: Understanding the Differences
REST, RPC, HTTP, TCP, and UDP each operate at a different level of abstraction and serve a different purpose in network communication:
📦 1. TCP (Transmission Control Protocol)
Type: Transport Layer Protocol (OSI Layer 4)
Purpose: Reliable, ordered, and error-checked delivery of data between applications
Use Cases: Web (HTTP), Email (SMTP), FTP
Key Features:
Connection-oriented
Guarantees packet delivery
Slower due to overhead (acknowledgements, retransmission, flow control)
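The properties above can be seen directly with Python's standard `socket` module. This is a minimal loopback sketch, not a production server: the client connects, sends bytes, and receives them back over an ordered, reliable stream.

```python
import socket
import threading

# A minimal TCP echo exchange over loopback, illustrating the
# connection-oriented handshake (listen/accept/connect) and the
# ordered, reliable byte stream TCP provides.

def run_server(server: socket.socket) -> None:
    conn, _addr = server.accept()        # blocks until a client connects
    with conn:
        data = conn.recv(1024)           # bytes arrive in the order sent
        conn.sendall(data.upper())       # sendall retries until all bytes go out

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello tcp")
    reply = client.recv(1024)

t.join()
server.close()
print(reply)  # b'HELLO TCP'
```

The acknowledgements and retransmissions that make this reliable are exactly the overhead that makes TCP slower than UDP.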
💨 2. UDP (User Datagram Protocol)
Type: Transport Layer Protocol (OSI Layer 4)
Purpose: Fast, connectionless communication
Use Cases: Video streaming, online gaming, DNS, VoIP
Key Features:
No guarantee of delivery or order
No connection setup — lightweight and fast
Suitable for latency-sensitive apps
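For contrast, here is the same idea with UDP. Note there is no connect/accept step: each `sendto()` is an independent datagram. (On loopback delivery effectively always succeeds, but over a real network this datagram could be lost or reordered with no retry.)

```python
import socket

# A minimal UDP exchange over loopback: no connection setup; each
# sendto() is an independent datagram with no delivery guarantee.

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))               # port 0: OS picks a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))   # fire-and-forget: no handshake, no ACK

data, addr = receiver.recvfrom(1024)          # one datagram in, boundaries preserved
sender.close()
receiver.close()
print(data)  # b'ping'
```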
🌐 3. HTTP (Hypertext Transfer Protocol)
Type: Application Layer Protocol (built on TCP)
Purpose: Transmit hypermedia (HTML, JSON, etc.) between clients and servers
Use Cases: Web APIs, browsers, REST APIs
Key Features:
Stateless, request-response protocol
Typically runs on port 80 (HTTP) or 443 (HTTPS)
Built on top of TCP
Note: HTTP often serves as the underlying transport for both REST and RPC.
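The stateless request/response cycle can be demonstrated end to end with the standard library alone. This sketch spins up a throwaway local server (the `/health` route is purely illustrative) and performs one GET against it:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

# One stateless HTTP round trip: the server keeps no client state
# between requests; each exchange is one request and one response.

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0: OS picks a port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/health")                   # HTTP rides on a TCP connection
resp = conn.getresponse()
status, payload = resp.status, resp.read()
print(status, payload)  # 200 b'{"status": "ok"}'
server.shutdown()
```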
🔁 4. RPC (Remote Procedure Call)
Type: Programming concept / communication pattern
Purpose: Execute a function/procedure on a remote server as if it's local
Use Cases: gRPC, Thrift, XML-RPC, JSON-RPC
Key Features:
Client invokes remote methods directly
Abstracts transport layer details
Can be tightly coupled (harder to evolve over time)
Note: gRPC uses Protocol Buffers (protobuf) for data serialisation, which encodes data in a compact binary format. Protobuf can also be used over plain HTTP as a replacement for JSON payloads.
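The core RPC idea, calling a remote procedure by name while hiding the transport, can be sketched without any framework. This is a simplified JSON-RPC-style dispatcher (the `add` method and registry are illustrative), with the "network" reduced to a string round trip:

```python
import json

# A simplified JSON-RPC-style dispatcher: the client "calls" a remote
# method by name; the transport (here just a JSON string round trip)
# is hidden behind the call.

def add(a: int, b: int) -> int:
    return a + b

METHODS = {"add": add}   # the server's registry of callable procedures

def handle(request_json: str) -> str:
    req = json.loads(request_json)
    result = METHODS[req["method"]](*req["params"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# Client side: the procedure call is encoded as data, not made locally.
request = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "add", "params": [2, 3]})
response = json.loads(handle(request))
print(response["result"])  # 5
```

In a real system, `handle()` would sit behind a TCP or HTTP listener; gRPC additionally generates the client stubs and uses protobuf instead of JSON on the wire.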
🌱 5. REST (Representational State Transfer)
Type: Architectural style using HTTP
Purpose: Build scalable and loosely-coupled web APIs
Use Cases: Public APIs, microservices communication
Key Features:
Resource-based (e.g., GET /users/1, POST /orders)
Stateless and cacheable
Uses HTTP verbs (GET, POST, PUT, DELETE)
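The resource orientation above can be sketched as a tiny framework-free router: URLs name resources, and the HTTP verb names the operation on them. The `users` resource and `Ada`/`Lin` records are purely illustrative.

```python
# A minimal sketch of resource-oriented routing (no framework): URLs
# identify resources, HTTP verbs identify the operation on them.

users = {1: {"id": 1, "name": "Ada"}}    # in-memory "users" resource
next_id = 2

def dispatch(method, path, body=None):
    global next_id
    parts = path.strip("/").split("/")   # e.g. "/users/1" -> ["users", "1"]
    if parts[0] != "users":
        return 404, None
    if method == "GET" and len(parts) == 2:
        user = users.get(int(parts[1]))
        return (200, user) if user else (404, None)
    if method == "POST" and len(parts) == 1:
        user = {"id": next_id, **body}
        users[next_id] = user
        next_id += 1
        return 201, user                 # 201 Created for a new resource
    return 405, None                     # verb not allowed on this resource

print(dispatch("GET", "/users/1"))                  # (200, {'id': 1, 'name': 'Ada'})
print(dispatch("POST", "/users", {"name": "Lin"}))  # (201, {'id': 2, 'name': 'Lin'})
```

Contrast this with the RPC style, where the client would call `createUser(...)` by name rather than POSTing a representation to a resource URL.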
🧠 Summary Comparison Table
| Feature | TCP | UDP | HTTP | RPC | REST |
| --- | --- | --- | --- | --- | --- |
| Layer | Transport | Transport | App | App concept | App concept |
| Reliability | Yes | No | Yes | Depends | Yes |
| Protocol Style | Stream | Datagram | Request/Response | Function Call | Resource-based |
| Transport Used | N/A | N/A | TCP | TCP/HTTP/Custom | HTTP |
| Speed | Moderate | Fast | Moderate | Fast | Moderate |
| Use Case | Raw data | Real-time | Web APIs | Microservices | Web APIs |
🤔 In Practice
TCP vs UDP = how data is transferred
HTTP = how clients/servers communicate over the web
REST vs RPC = how APIs are designed
REST over HTTP is a common web API pattern
RPC can be over HTTP (e.g., gRPC with HTTP/2), or directly on TCP
Choosing Between REST and RPC
The choice between REST and RPC boils down to the needs of your application:
Choose REST if simplicity, compatibility, and resource orientation are key.
Choose RPC if performance, compact payloads, and action orientation are critical.
Case Study: How LinkedIn Reduced Latency by up to 60%
LinkedIn improved its latency by up to 60% by replacing JSON with Protocol Buffers (Protobuf) for data serialization. Here's how they achieved this:
1. Why Did LinkedIn Replace JSON?
JSON is widely used for serialization due to its human readability and simplicity, but it has several drawbacks:
High serialization/deserialization time: JSON relies on text-based encoding, which requires expensive parsing.
Large payload sizes: JSON data is verbose due to repeated keys and lack of efficient binary encoding.
Higher network bandwidth: larger JSON payloads consume more bandwidth, which adds latency.
High CPU usage: Serialization and deserialization are computationally expensive, especially for large-scale distributed systems.
LinkedIn, handling billions of requests per day, faced latency issues and increased infrastructure costs due to these inefficiencies.
2. How Did Protobuf Help?
a) Compact Binary Encoding
Protobuf is a binary format, which means it requires less bandwidth and less memory for transmission compared to JSON.
JSON includes redundant key names, while Protobuf uses numeric field tags, reducing data size significantly.
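The size difference is easy to demonstrate. The sketch below hand-rolls a varint encoder in the spirit of protobuf's length-delimited fields; it mimics the tag-instead-of-key-name idea only and is not the full protobuf wire format, and the `member_id`/`headline` record is an invented example.

```python
import json

# Illustrative size comparison: JSON repeats key names as text, while a
# Protobuf-style encoding replaces them with small numeric field tags.

def varint(n: int) -> bytes:
    # Variable-length integer: 7 bits per byte, high bit = "more follows"
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_field(tag: int, value: bytes) -> bytes:
    # tag + wire type 2 (length-delimited), then length, then the payload
    return varint((tag << 3) | 2) + varint(len(value)) + value

record = {"member_id": "12345", "headline": "Engineer"}
json_bytes = json.dumps(record).encode()
binary = encode_field(1, b"12345") + encode_field(2, b"Engineer")

print(len(json_bytes), len(binary))  # the binary form is noticeably smaller
```

Each field costs just two bookkeeping bytes (tag and length) instead of a quoted key name plus punctuation, and the gap widens as records repeat in a list.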
b) Faster Serialization & Deserialization
JSON requires string parsing, while Protobuf directly maps to efficient binary representations, leading to faster encoding/decoding.
This improves CPU efficiency and reduces garbage collection overhead in JVM-based applications.
c) Schema Evolution Without Breaking Changes
Protobuf supports backward and forward compatibility, allowing LinkedIn to evolve APIs smoothly without impacting older clients.
JSON lacks built-in schema enforcement, increasing the risk of breaking changes.
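Forward compatibility can be sketched in a few lines. Real protobuf does this at the wire level; here the message is modelled as (tag, value) pairs, and the schemas and field names are invented for illustration: an "old" reader simply skips tags it does not recognise instead of failing.

```python
# A simplified sketch of forward compatibility: an old reader decodes a
# message produced by a newer schema and skips unknown field tags.

OLD_SCHEMA = {1: "member_id", 2: "headline"}   # fields the old client knows

def decode(pairs, schema):
    record = {}
    for tag, value in pairs:
        name = schema.get(tag)
        if name is None:
            continue        # unknown field from a newer schema: skip, don't crash
        record[name] = value
    return record

# A newer producer added field 3 ("pronouns"); the old reader still works.
new_message = [(1, "12345"), (2, "Engineer"), (3, "she/her")]
print(decode(new_message, OLD_SCHEMA))  # {'member_id': '12345', 'headline': 'Engineer'}
```

Because fields are identified by stable numeric tags rather than names, producers can add fields without coordinating a simultaneous upgrade of every consumer.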
3. Measured Performance Gains
LinkedIn observed the following improvements after switching to Protobuf:
Latency reduced by 60% (mostly due to faster serialization/deserialization).
Payload size reduced by 50-80%, leading to lower network bandwidth usage.
CPU utilization dropped, allowing better resource utilization.
4. Where Did LinkedIn Apply Protobuf?
LinkedIn initially introduced Protobuf in its Venice key-value store and later expanded it to other services such as:
Rest.li (LinkedIn's API framework)
Kafka messages for event streaming
Inter-service communication within microservices
5. Lessons for Other Companies
If your system is high-scale and latency-sensitive, switching from JSON to Protobuf can:
Improve API performance in microservices.
Reduce cloud/server costs due to lower CPU and bandwidth usage.
Enhance data consistency with schema enforcement.
However, Protobuf is not human-readable, which can make debugging harder compared to JSON. For applications requiring human interaction with APIs (e.g., REST APIs for web clients), JSON may still be preferable.