Get started with Packet's networking features, including public, backend, and Elastic IP address space.
Network Design Overview
Packet’s network and datacenter topology is designed with performance and redundancy as top priorities. Two distinguishing features are a full Layer 3 topology and native dual-stack (IPv4/IPv6) support.
With a pure Layer 3 network design, each server is directly attached to a physical switch via either 2 x 1 Gbps copper or 2 x 10 Gbps SFP+ connections, providing elastic, cloud-style networking without the slow, latency-inducing characteristics often associated with overlays and Layer 2 VLANs.
Each server has dedicated dual network connections going into two Top-of-Rack (ToR) switches. These two physical connections are virtually bonded together as follows:
- t1.small: 1 Gbps Network (2 × 1 Gbps Intel NICs w/ TLB): full hardware redundancy, but no active/active bond.
- c1.small: 2 Gbps Bonded Network (2 × 1 Gbps Intel NICs w/ LACP): full hardware redundancy and active/active bond.
- x1.small: 10 Gbps Network (Intel X710 NIC)
- m1.xlarge: 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX NICs w/ LACP): full hardware redundancy and active/active bond.
- c1.large.arm: 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX NICs w/ LACP): full hardware redundancy and active/active bond.
- c1.xlarge: 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX NICs w/ LACP): full hardware redundancy and active/active bond.
- s1.large: 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX NICs w/ LACP): full hardware redundancy and active/active bond.
- m2.xlarge (Intel Scalable): 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX-4 NICs w/ LACP): full hardware redundancy and active/active bond.
- c2.medium (AMD): 20 Gbps Bonded Network (2 × 10 Gbps Mellanox ConnectX NICs w/ LACP): full hardware redundancy and active/active bond.
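On a provisioned Linux server, you can verify which bonding mode is in effect via the kernel's bonding status file. A minimal sketch (the file only exists on hosts where a bond is configured):

```shell
# Inspect bond0 on a provisioned server.
# "IEEE 802.3ad Dynamic link aggregation" corresponds to LACP (mode 4);
# "transmit load balancing" corresponds to TLB (mode 5, used on t1.small).
BOND_STATUS=/proc/net/bonding/bond0
if [ -r "$BOND_STATUS" ]; then
  grep -E 'Bonding Mode|Slave Interface|MII Status' "$BOND_STATUS"
else
  echo "No bond configured at $BOND_STATUS"
fi
```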
IP Space, Elastic IPs, Global Anycast IPs and Backend Networking
Default IPv4 and IPv6 Addresses - Packet servers are grouped into projects to support backend networking and ease of collaboration. When you create a new project, our platform assigns it two per-facility IP blocks: an IPv6 /56 and a private IPv4 /25.
When you then deploy servers into a project, each machine gets one public IPv4 from Packet’s general pool, as well as one private IPv4 and one IPv6 from the blocks already assigned to the project. Note: since the public IPv4 for each server is assigned from our general pool, you will lose it if you delete your server (see the elastic addressing section below for how to keep IPs around).
Elastic IPs - Elastic IPs (IPv4 only) can be requested via the portal or API, and once assigned are instantly routable to any server within your project based upon location. They cost $0.005/hr ($3.60/mo) per IP.
We recommend using Elastic IPs for any workload where permanent reachability is required since these IPs can be retained and reused even if you add or remove specific servers. For example, hosting a public-facing web site behind a load balancer, or directing clients to a clustered database server internally. Move IPs from one server to another if one fails, or move workload to a bigger (or smaller!) server to respond to traffic demands.
Also, Elastic IPs can be used as additional IPs that you can attach to a server. This is useful if you need additional IP space on a server for containerized or virtualized workloads, where every VM might need a direct connection to the public internet, without the need of NAT-ing and extra routes.
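As a sketch, one way to bind an additional Elastic IP is as an alias on bond0; on a Debian-style image the fragment might look like the following (the address 203.0.113.10 is a hypothetical placeholder for an Elastic IP assigned to your project):

```
# /etc/network/interfaces.d/elastic-ip (sketch; the address is a placeholder)
auto bond0:0
iface bond0:0 inet static
    address 203.0.113.10
    netmask 255.255.255.255
```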
Global Anycast IPs - can be requested via the portal or API. There is a hard limit of four (4) IPs per project. You can then announce this IP space via Local BGP, for example using bird. Global Anycast IPs can be announced through all of our core sites, with Edge sites coming soon.
Each Global Anycast IP costs $0.15/hour. For example, a /31 (two usable IPs) would cost $0.30/hr. Bandwidth is also a consideration: the regular $0.05/GB outbound rate applies, and inbound bandwidth to Global Anycast IPs is billed at $0.03/GB.
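A back-of-the-envelope sketch of the arithmetic above, using a 720-hour month (consistent with the $0.005/hr ≈ $3.60/mo Elastic IP figure):

```shell
# Estimate the monthly cost of a /31 Global Anycast block:
# 2 IPs at $0.15/hr each, over a 720-hour month.
HOURS=720
RATE_PER_IP=0.15
IPS=2
MONTHLY=$(awk -v h="$HOURS" -v r="$RATE_PER_IP" -v n="$IPS" 'BEGIN { printf "%.2f", h * r * n }')
echo "Monthly cost for $IPS anycast IPs: \$$MONTHLY"
```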
Backend Networking -
All servers within a project can talk to each other via private RFC1918 address space (e.g. 10.x.x.x), but cannot be reached over that private address space by devices outside of the project. This is also referred to as "backend" networking. The only restriction is that all servers must be within a single project.
Private networking works within a single datacenter, and can be extended between facilities using our Backend Transfer feature (note: bandwidth costs apply when sending traffic between facilities using Backend Transfer).
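A quick sketch for deciding whether an address falls in that backend-reachable RFC1918 space (a rough prefix match, not a full parser):

```shell
# Check whether an address is in RFC1918 private space, the range Packet
# uses for backend networking. The sample addresses are placeholders.
is_rfc1918() {
  case "$1" in
    10.*|192.168.*)                         return 0 ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) return 0 ;;
    *)                                      return 1 ;;
  esac
}

is_rfc1918 10.99.0.5   && echo "10.99.0.5 is backend-reachable"
is_rfc1918 203.0.113.1 || echo "203.0.113.1 is public"
```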
Packet's "bonded" network interfaces
The network interfaces on our servers work a little differently than what you might be accustomed to at other data centers.
Servers are installed with a single logical interface, 'bond0', which contains three types of IP addresses:
- A public IPv4 address
- A private IPv4 address
- A public IPv6 address
We are running LACP (mode 4 in Linux bonding parlance) on our production NICs. Each server also has a dedicated "out of band" NIC for management functionality, such as IPMI and virtual console.
Using this configuration, we're able to deliver on both private networking *and* high availability, using our dual (1G or 10G) NIC hardware platform. Servers are connected to a redundant pair of edge switches, for maximum fault tolerance.
If an entire switch experiences a hardware failure or undergoes scheduled software upgrades, capacity is effectively "halved" (you'll drop down to a single 1G or 10G link); network connectivity, however, will remain intact, save perhaps for a failover period lasting several seconds or less.
Likewise, all upstream backbone hardware inside the Packet data center is deployed as "active/active", in that we can lose a single device in any role without any outage.
As we are using sFlow for traffic accounting, and not conventional (SNMP) port counters, we are able to only bill customers for Internet-bound traffic, and not inter-machine traffic—even in cases where both public and private traffic passes over the same physical NICs.
All Packet customers enjoy free inbound traffic and free traffic between servers in the same Packet project (within the same datacenter location). We aggressively peer with a wide variety of content providers and eyeball networks, and feel that our network can appropriately be described as “premium.”
Outbound traffic to the Internet (including to other Packet data center locations) is charged at the default rate of $0.05/GB.
💡TIP: Our bandwidth graphs show ingress/egress and not just the billable bandwidth.
If you have larger scale bandwidth needs, be sure to contact us for custom pricing.
Backend Transfer - backend transfer within the same facility does not incur billable charges. Transferring data between our global facilities, however, requires moving that traffic across our network, so it is billed on a usage basis at a reduced rate of $0.03/GB.
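A quick sketch comparing the two rates for a hypothetical 500 GB transfer:

```shell
# Compare the cost of moving 500 GB between facilities via Backend Transfer
# ($0.03/GB) versus regular outbound Internet bandwidth ($0.05/GB).
GB=500
BACKEND=$(awk -v g="$GB" 'BEGIN { printf "%.2f", g * 0.03 }')
PUBLIC=$(awk -v g="$GB" 'BEGIN { printf "%.2f", g * 0.05 }')
echo "Backend Transfer: \$$BACKEND  Public egress: \$$PUBLIC"
```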
Global Anycast IPs - Traditional outbound bandwidth charge applies, which is $0.05/GB. All inbound traffic to Global Anycast IPs will be charged at $0.03/GB.
We offer a customer VPN (virtual private network) service, also called Doorman. Once the connection is established, you can enjoy secure traffic between you and your servers for management purposes.
This is not a solution meant for web traffic because you will only be able to reach the private IPs assigned to your servers.
You can find more information regarding Doorman and how to set it up here.
Packet supports BGP (Border Gateway Protocol) as a protocol for advertising routes to Packet servers in a local environment (called Local BGP), as well as to the greater Internet (called Global BGP).
Local BGP - In a Local BGP deployment, a customer uses an internal ASN (Autonomous System Number) to control routes within a single Packet data center. This means that the routes are never advertised to the global Internet, and the customer does not need to pay for or maintain a registered ASN. The top use case for Local BGP is performing failover or IP mobility between a collection of servers in a local datacenter.
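As a rough sketch, a Local BGP failover setup using bird 1.x might announce an Elastic IP bound to the loopback interface. Every value below (router ID, ASNs, neighbor address, prefix) is a placeholder; the real values come from your server's BGP configuration in the portal or API:

```
# Minimal Local BGP sketch for bird 1.x. All values are placeholders.
router id 10.99.0.5;

protocol direct {
  interface "lo";                 # the Elastic IP is bound to loopback
}

protocol kernel {
  scan time 10;
  import all;
}

protocol bgp tor {
  local as 65000;                 # private ASN used for Local BGP
  neighbor 10.99.0.1 as 65530;    # top-of-rack gateway (placeholder)
  import none;
  export filter {
    if net = 203.0.113.10/32 then accept;  # the Elastic IP to fail over
    reject;
  };
}
```

Shutting bird down on one server and starting it on another moves the /32 announcement, and with it the traffic.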
Global BGP / Anycast - Global BGP requires a customer to use their own IP space (via a registered ASN through one of the regional registries) or tap into Packet's Global Elastic IP feature.
Pricing - BGP requests will be approved on a per-project basis:
- Global BGP - Free (requires onboarding and review)
- Local BGP - Free
Packet BGP Policies
Packet builds its customer route filters using the Internet Routing Registry (IRR). Customers may provide us with an AS or AS-MACRO to use at the time of Global BGP turn-up.
We walk the IRR hourly to build our filters; however, newly-advertised routes can take up to 48 hours to be operable on a global basis, as some providers we connect with require a longer period of time to process these changes. We will automatically e-mail Global BGP customers informing them of any newly-registered routes we are accepting.
With the exceptions detailed below, a /24 (IPv4) or /48 (IPv6) are generally considered the smallest sized netblocks which are globally routable on the public Internet.
Packet, at its sole discretion, may choose to not route a customer AS or prefix, if we are unable to verify that it belongs to our customer, or will be used in accordance with our Acceptable Usage Policy (AUP).
Generally speaking, we do not require a Letter of Authority to route a given prefix, provided it is registered correctly in the IRR, however, we may ask for one in certain circumstances.
Packet supports the following BGP communities on routes learned from customers:
- BGP blackhole: Packet will blackhole traffic to your prefix, as well as signal supporting transit providers and peering partners to do the same. We support de-aggregation down to the /32 (IPv4) and /128 (IPv6) level with this community only.
- Anycast routing: by tagging your routes with this community, Packet will advertise them only to transit and peering we maintain consistently on a global basis. Regional ISPs we connect with (e.g. Verizon in the New York metro area) will not learn your routes, as advertising to them may result in "scenic" routing for customers with global Anycast configurations. Please note that this community is not advised for normal use, as it will limit the number of available paths/providers you have access to; it should only be deployed by customers seeking a BGP anycast topology, with multiple server instances deployed in each Packet datacenter. NOTE: this community is in beta testing at this time.
Deliverability you can expect from the Packet IP network
The best in the business! The Packet network is built from the ground up and engineered to support large, high traffic, customer workloads. Some fun facts about Packet's deliverability:
- At the core of Packet's delivery strategy is dense interconnection. Packet's network extends to large carrier hotel facilities, where we are able to establish peering with hundreds of content sources, broadband access providers, and other hosting companies.
- Packet also uses its connectivity into carrier hotels to purchase IP connectivity from a diverse assortment of providers. Rather than practice least-cost routing, these vendors are surgically selected, with a focus on deliverability into difficult-to-reach access networks.
- Both these types of connections go over our own leased dark fiber network. We deploy DWDM muxes, so we're able to turn up incremental capacity at the drop of a hat—up to 800Gb/s of it in total!
- At Packet, the only "QoS" in our vernacular is "Quantity of Service". Every aspect of our network—from our intra-datacenter interconnects, through to our backbone network and third-party provider connections—is engineered with copious amounts of spare capacity, so we won't be caught by surprise due to traffic bursts.
In addition, every hardware role in our network is fully redundant, from our backbone routers all the way down to the "top-of-rack" (some of them are actually in the bottom or center, given our high-density equipment layout, but you get the picture) edge devices. At Packet, downtime is simply not a word in our vocabulary, whatever the reason.
Customers experiencing intermittent connectivity or slow traffic speeds are encouraged to e-mail email@example.com to engage our engineering team.
Does Packet allow multicast?
Packet does not support multicast in its default Layer 3 network topology, due to performance and security concerns around multi-tenant switch and router scaling. However, there are ways to accomplish this; we would suggest a GRE tunnel between your servers.
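A minimal sketch of that approach, assuming two servers with hypothetical backend addresses 10.99.0.5 and 10.99.0.6. With `RUN=echo` the script only prints the commands; clear it and run as root on each server (swapping LOCAL and REMOTE) to actually build the tunnel:

```shell
# Sketch: a GRE tunnel between two servers over their private (backend) IPs,
# giving workloads a point-to-point link that can carry multicast across the
# Layer 3 fabric. All addresses are placeholders.
LOCAL=10.99.0.5
REMOTE=10.99.0.6
RUN=echo   # dry-run guard: prints commands instead of applying them

$RUN ip tunnel add gre1 mode gre local "$LOCAL" remote "$REMOTE" ttl 64
$RUN ip addr add 192.168.100.1/30 dev gre1
$RUN ip link set gre1 up
```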
Note: Packet’s Layer 2 service is based on EVPN and VXLAN. We cannot run multicast services within the Layer 2 network.
You can find more information regarding BGP and how to set it up here.
What are the advantages of a Layer 3 network topology?
With Packet's layer-3 network design, each compute host is directly attached to a physical switch via either 2 x 1Gbps copper or 2 x 10Gbps SFP+ connections. This is a bit different from other / older hosting environments which use a shared overlay network.
At Packet, no software overlay network is needed to deliver elastic (cloud-style) networking. Instead, we move IPs and traffic around via our top-of-rack switches, helping to create a more flexible and performant network environment.
Our purpose-built network is 100% IPv6-ready with no sluggish overlays. Packet is a full dual-stack provider so each server is issued its own /64 on provision.
Layer 2 Support?
What nameservers do I use to point my domain to Packet?
We do not offer forward DNS. We do, however, provide reverse DNS on the management IP address of the server. This is created during the server provisioning process.