Alternatives to the Cisco 24-Port Gigabit PoE+ Managed Switch

In modern offices and homes it is common to see devices that draw Power over Ethernet (PoE, 802.3af), such as wireless access points, IP cameras and Voice over IP phones. A midsize office or a relatively large house often requires a gigabit PoE switch to power all these devices, and a 24-port gigabit PoE switch is popular among most users. But over the past ten years some devices have been designed to use more power than traditional PoE delivers, demanding the newer PoE+ (802.3at) standard, which provides higher power over an Ethernet cable. So a 24-port gigabit PoE+ managed switch is used to power them. The Cisco Catalyst 2960S-24PS-L qualifies in every respect when cost is not a problem. In this post we're going to find some 24-port gigabit PoE+ managed switches that can replace it in most situations.

24-port gigabit PoE+ managed switch

Overview on Cisco 24-Port Gigabit PoE+ Managed Switch

The Cisco WS-C2960S-24PS-L is the 24-port gigabit PoE+ managed model of the Catalyst 2960-S series. It is a managed Layer 2 switch with 24 Ethernet 10/100/1000 PoE+ ports and 4 gigabit Ethernet SFP ports. Its total available PoE power is 370 watts, which means it can support up to 24 PoE devices or up to 12 PoE+ devices. (To estimate how many PoE/PoE+ devices a switch supports, simply divide the total PoE budget by 15.4 W or 30 W respectively.) Its switching bandwidth and forwarding rate are 176 Gbps and 41.7 Mpps respectively. Other parameters we will take into consideration are VLAN IDs (4000), maximum VLANs (256) and jumbo frames (9216 bytes). It is a fully managed switch that supports Web GUI, CLI, Telnet and SNMP (v1, v2, v3).
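The device-count rule of thumb above can be sketched in a few lines of Python (an illustration only; real per-port draw varies by device class):

```python
# Rule-of-thumb device count from a PoE budget: divide by the per-port
# maximum defined in the standards (15.4 W for 802.3af, 30 W for 802.3at).
def max_powered_devices(budget_watts, per_port_watts):
    return int(budget_watts // per_port_watts)

budget = 370  # W, the Catalyst 2960S-24PS-L's total PoE budget
print(max_powered_devices(budget, 15.4))  # 24 PoE devices
print(max_powered_devices(budget, 30.0))  # 12 PoE+ devices
```

In practice a mixed load of PoE and PoE+ devices simply has to fit within the 370 W total.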

Cisco 2960S 24-port gigabit poe+ managed switch

Comparison of 24-Port Gigabit PoE+ Managed Switches

The Cisco Catalyst 2960S-24PS-L is an excellent 24-port gigabit PoE+ managed switch. But there are cases where we want to support the same number of PoE/PoE+ devices without needing a 176 Gbps backplane, or where we want to cut the budget, so we look for a replacement for this fully managed Cisco 24-port gigabit PoE+ switch. Here are four 24-port gigabit PoE+ managed switches that match the Cisco Catalyst 2960S-24PS-L in VLAN IDs, maximum VLANs and jumbo frame size: the HP 2920-24G-PoE+, Netgear M4100-24G-POE+, Ubiquiti US-24-500W and FS.COM S1600-24T4F. The following table gives some information about them.

Switch Model | Cisco WS-C2960S-24PS-L | HP 2920-24G-PoE+ | Netgear M4100-24G-POE+ | Ubiquiti US-24-500W | FS S1600-24T4F
Device Type | 24-port Gigabit PoE+ managed, Layer 2 | 24-port Gigabit PoE+ managed, Layer 2+ | 24-port Gigabit PoE+ managed, Layer 2+ | 24-port Gigabit PoE+ managed, Layer 2 | 24-port Gigabit PoE+ managed, Layer 2+
Ports | 24 RJ45 10/100/1000 PoE+ ports, 4 1G SFP ports | 24 RJ45 10/100/1000 PoE+ ports, 4 combo ports | 24 RJ45 10/100/1000 PoE+ ports, 4 combo ports | 24 RJ45 10/100/1000 PoE+ ports, 2 1G SFP ports | 24 RJ45 10/100/1000 PoE+ ports, 2 combo ports, 2 1G SFP ports
Switching Capacity | 176 Gbps | 128 Gbps | 48 Gbps | 52 Gbps | 52 Gbps
Forwarding Rate | 41.7 Mpps | 95.2 Mpps | 35.714 Mpps | 38.69 Mpps | 38.69 Mpps
PoE Budget | 370 W | 370 W | 380 W | 500 W | 600 W
Price | $1,165.00 | $1,139.00 | $671.84 | $528.79 | $419.00

From the table we can see that the 24 RJ45 ports of all five switches are 802.3af/at compliant, and each switch provides two to four gigabit uplink ports. The main differences between them are switching capacity, forwarding rate and PoE budget.

Comparing the Cisco WS-C2960S-24PS-L with the HP 2920-24G-PoE+, they have similar new-device prices and identical PoE budgets. The HP 24-port gigabit PoE+ managed switch also offers a switching capacity above 100 Gbps, and a much higher forwarding rate than the Cisco switch. They can support the same number of PoE/PoE+ devices.

The Netgear M4100-24G-POE+, Ubiquiti US-24-500W and FS.COM S1600-24T4F have much smaller switch fabrics and somewhat lower forwarding rates than the Cisco model. The M4100-24G-POE+ supports the same number of PoE/PoE+ devices as the Cisco Catalyst 2960S-24PS-L at about half the price, but it has the smallest switch fabric and lowest forwarding rate among the five. The Ubiquiti US-24-500W and FS.COM S1600-24T4F have higher PoE budgets than the other three models, so they can power more PoE/PoE+ devices simultaneously. They are also the two cheapest of the group, and the S1600-24T4F has the highest total PoE budget in this comparison.
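A quick sketch of what those budgets imply for PoE+ headcounts, at the 30 W 802.3at per-port maximum (actual counts depend on each device's real draw):

```python
# PoE+ headcount implied by each PoE budget in the table above, using the
# 30 W 802.3at per-port maximum as a conservative estimate.
budgets_w = {"WS-C2960S-24PS-L": 370, "2920-24G-PoE+": 370,
             "M4100-24G-POE+": 380, "US-24-500W": 500, "S1600-24T4F": 600}
poe_plus_devices = {model: int(w // 30) for model, w in budgets_w.items()}
print(poe_plus_devices)  # 12, 12, 12, 16 and 20 devices respectively
```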

24-port gigabit PoE+ managed switch FS.COM S1600-24T4F

Summary

In this article we set out to find 24-port gigabit PoE+ managed switches that can replace the Cisco Catalyst 2960S-24PS-L in some situations. If you want an equivalent 24-port gigabit PoE+ managed switch with a higher forwarding rate, the HP 2920-24G-PoE+ is a suitable choice. If the switching fabric is not a key requirement and there is a need to pare the budget down, have a look at the Netgear M4100-24G-POE+, Ubiquiti US-24-500W and FS.COM S1600-24T4F. Considering the total number of PoE/PoE+ devices to be connected, if more than 12 PoE+ devices are required, the Ubiquiti US-24-500W and FS.COM S1600-24T4F are the better options.

Source: http://www.fiber-optic-equipment.com/alternatives-cisco-24-port-gigabit-poe-managed-switch.html

 


How to Build a 40G Data Center Network With a 32-Port 40G Switch?

At the start of this year we did not anticipate that shared bikes would spread all over the world, yet by year's end they are everywhere. Bike sharing is one instance of the Internet of Things (IoT), and many other applications testify to the growth of network-dependent technologies, such as self-driving cars and smart phones and tablets. They all call for high bandwidth and low latency. But the old network infrastructure of data centers is not capable enough in such an environment, especially for data centers that must handle huge amounts of traffic. So some data centers are upgrading from 10G to 40G networking with 40 Gigabit Ethernet switches, of which a 32-port 40G switch is a typical choice.

Limits of Old Data Center Network Infrastructure

What are the limits of the old data center network infrastructure? In the past, the major traffic in data centers flowed in the north-south direction, so 10G uplink ports between the Top of Rack (ToR) switches and the aggregation switches were enough. But as new applications and services rapidly emerge, the traffic between end users and the data center is increasing, and so is the east-west traffic within the data center. Congestion, poor scalability and latency issues occur when data centers keep using the traditional network infrastructure.

The New Fabric for Data Center 40G Networking With 32-Port 40G Switch

To meet the requirements of ever-increasing network applications and services, data centers are constantly seeking better solutions. The primary problems are bandwidth and latency. One important step is to upgrade from 10G networking to 40G networking; since 40G switch and accessory prices have dropped considerably, it is now feasible to deploy 32-port 40G switches in the aggregation layer. To reduce latency, it is wise to replace the old topology with a spine-leaf topology.

Scaling Example by Using 32-Port 40G Switch

A network based on the spine-leaf topology is considered highly scalable and redundant, because each hypervisor in a rack connects to every leaf switch in that rack, and each leaf switch connects to every spine switch, which provides a large amount of bandwidth and a high level of redundancy. In a 40G network, every connection, from hypervisor to leaf switch and from leaf switch to spine switch, runs at 40G. In a spine-leaf topology, the leaf switches are the ToR switches and the spine switches are the aggregation switches.

data center 40G networking in spine-leaf topology

One principle of the spine-leaf topology is that the number of leaf switches is determined by the number of ports on the spine switch, while the number of spine switches equals the number of uplink connections on each leaf. A 32-port 40G switch such as the FS.COM N8000-32Q has a maximum of 32 40G ports, but some of them must be reserved as uplinks to the core switches. In this case we use 24 40G ports for connectivity to the leaf switches, meaning there are 24 leaf switches in each pod. The leaf switch we use is the FS.COM S5850-48S6Q, a 48-port 10G switch with 6 40G uplink ports. Each leaf switch has 4 40G uplinks to the spine switches, and each spine switch connects to the two core switches.
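The pod arithmetic above can be sketched as follows; the function and variable names are illustrative, and the figures restate the example of a 32-port 40G spine with 8 ports kept for core uplinks and 48x10G/6x40G leaves using 4 uplinks each:

```python
# Hedged sketch of spine-leaf pod sizing; names are illustrative.
def pod_size(spine_ports, spine_core_uplinks, leaf_host_ports, leaf_uplinks):
    leaves = spine_ports - spine_core_uplinks   # spine ports left for leaves
    spines = leaf_uplinks                       # one leaf uplink per spine
    host_ports = leaves * leaf_host_ports       # 10G server ports per pod
    oversub = (leaf_host_ports * 10) / (leaf_uplinks * 40)  # at each leaf
    return leaves, spines, host_ports, oversub

print(pod_size(32, 8, 48, 4))  # (24, 4, 1152, 3.0)
```

So a pod of this shape offers 1152 server-facing 10G ports with a 3:1 oversubscription ratio at each leaf.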

data center 40G networking with 32-port 40G switch

Further Enhancing the 40G Network by Zones

This new data center fabric built with 32-port 40G switches improves bandwidth and latency, but it is not perfect either. Every network switch has limited memory for MAC addresses, ARP entries, routing information, etc. The core switch in particular can store only a limited number of ARP entries compared with the large number it has to handle.

Therefore we need to split the network into zones. Each zone has its own core switches, and each pod has its own spine switches. Different zones are connected by edge routers. With this design we can expand the network horizontally as long as there are available ports on the edge routers.

data center 40G networking with 32-port 40G switch optimized by zones

Conclusion

The transformation of data centers is driven mainly by user demand. The increasing number of networking applications and the growing traffic push data centers to evolve from the old fabric to a new one. So some data centers have moved from 10G networking to 40G networking, using a 40 Gigabit Ethernet switch such as a 32-port 40G switch as the spine switch, and better optimized designs are adopted to ensure the desired performance of the new 40G network.

Source: http://www.fiber-optic-transceiver-module.com/how-to-build-a-data-center-of-40g-networking.html

 

What Is a Core Switch and Why Do We Need It?

Network switches are categorized into different types according to different principles: fixed switch versus modular switch based on whether you can add expansion modules, and managed switch, smart switch or unmanaged/dumb switch depending on whether you can configure it and the complexity of the configuration. Another way to classify a network switch is by the role it plays in a local area network (LAN). In this case, a switch is considered to be an access switch, an aggregation/distribution switch or a core switch. In small networks we do not see a core switch, so many people have questions about core switches. Do you know what a core switch is? Is there only one core switch in a network? What are the differences between a core switch and an aggregation or access switch?

What Is a Core Switch?

If we spend some time looking up the meaning of core switch, we will find a definition similar to this: "A core switch is a high-capacity switch generally positioned within the backbone or physical core of a network. Core switches serve as the gateway to a wide area network (WAN) or the Internet—they provide the final aggregation point for the network and allow multiple aggregation modules to work together" (an excerpt from Techopedia). The definition explains its high-capacity nature, its physical location and its function of connecting multiple aggregation devices in the network.

What Are the Differences Between a Core Switch and Other Switches?

The biggest difference between a core switch and other switches is that the core switch must always be fast, highly available and fault tolerant, since it connects all the aggregation switches. Therefore, a core switch should be a fully managed switch. A switch not used in the core layer, by contrast, can be a smart switch or even an unmanaged switch.

Another difference is that the core switch is not always needed in a LAN, while we usually have aggregation switches and access switches. In small networks with only a couple of servers and a few clients, there is no real need for a separate core layer. In the scenario where we don't need the core layer, we speak of a collapsed core or collapsed backbone, since the core layer and the aggregation layer are combined.

The third difference is that there is generally only one core switch (or two for redundancy) in a small or midsize network, while the aggregation layer and the access layer may each have multiple switches. The figure below shows where the core switch is located in a network.

Core switch in the core layer

What Should Be Kept in Mind When Using a Core Switch?

The first thing to keep in mind is that a core switch is urgently required in two situations. One is when the access switches are located in different places, with an aggregation switch in each place; a core switch then optimizes the network. The other is when the number of access switches connecting to a single aggregation switch exceeds what it can handle, so multiple aggregation switches are needed in a single location; here a core switch reduces the complexity of the network.

With core switch and without core switch

As for the specific type and number of core switches to adopt, that depends on the scale and budget of the network, including how many servers, clients or lower-layer switches we have. For example, for a small network with 100 users and six 48-port Gigabit aggregation switches, a suitable core switch could be a Juniper EX2200, Cisco SG300 or FS.COM S5800-8TF12S.

The second thing is that a core switch should be fully managed, which means it should support different methods of management, such as web-based management, a command line interface and SNMP management. It should also have advanced features such as IPv6 support, built-in Quality of Service (QoS) controls and Access Control Lists (ACLs) for network security.

Generally the connections to the core layer should have the highest possible bandwidth. In addition, since the core switch acts as the center of a LAN, it should be able to reach any device in the network, if not directly then via its routing table. A core switch is usually connected to the WAN router.

Conclusion

In the design of a network there may be an access layer, an aggregation layer and a core layer. Though the core layer is not required in smaller networks, it is indispensable in medium and large networks. The high-capacity core switch plays an important role in delivering frames and packets as fast as possible at the center of the network. Its contribution cannot be underestimated, especially in networks where speed, scalability and reliability are key to users.

Source: http://www.fiber-optic-tutorial.com/what-is-a-core-switch.html

 

Recommendations on “Managed” Fanless Gigabit Switch 24-Port or Less

When would we consider buying a managed fanless Gigabit switch with 24 ports or less? Common situations are small-office connectivity and home-lab upgrades. In these settings we pursue Gigabit speed because it improves the end-user experience and work efficiency, while we also require the machine to run quietly so that users are not disturbed. As for management functions, different people want different levels of control over their network, but overall, managing a fanless Gigabit switch of 24 ports or less is not expected to be as complex as managing a fully managed enterprise switch; otherwise the user experience would be degraded rather than enhanced. This post recommends some easily managed 8- to 24-port fanless Gigabit switches. For more information on whether to choose a fanless switch or one with a fan, please read the post Should You Buy a Fanless Switch or Switch With Fan?

managed fanless gigabit switch 24-port

Recommended Managed Fanless Gigabit Switches, 24-Port or Less

A fanless switch usually has no more than 24 ports. When a switch has more than 24 ports, for example a 48-port switch, the power supply has to be big enough and there are many chips inside the box, so without fans the airflow might become a problem. The fanless Gigabit managed switches we recommend are therefore 24-port or less, and they are all non-PoE switches.

Managed Fanless Gigabit Switch 24-Port

There are many 24-port fanless Gigabit switches on the market; the five models we recommend come from four brands. They are the HP ProCurve 1800-24G and 1810-24G smart-managed Gigabit switches, the Cisco Catalyst 2960XR-24TS-I 24-port fanless Gigabit switch, the FS S2800-24T4F fanless 24-port Gigabit managed switch and the Zyxel GS1900-24 smart managed switch.

Table 24-port fanless gigabit managed switch

They share some characteristics that make them suitable for places like home offices and small offices, including low power consumption and Gigabit fiber uplink ports. And of course the most important property is that they are silent in operation.

Another key factor that qualifies these five switches for the managed fanless Gigabit 24-port list is their management function. All five are managed switches that provide full Layer 2 traffic management features and simple network management via a Web GUI.

Cost-wise, the HP 1800-24G, HP 1810-24G, FS S2800-24T4F and Zyxel GS1900-24 are all good choices. The Cisco Catalyst 2960XR-24TS-I costs more than the other four, but it provides some more advanced Layer 3 features. If we need stronger data-transfer capability, the Cisco Catalyst 2960XR-24TS-I is a good choice considering its backplane and forwarding rate.

In terms of power consumption, the FS S2800-24T4F and Zyxel GS1900-24 each consume at most 20 W, while the FS S2800-24T4F provides two more combo Gigabit SFP/RJ45 ports for uplinking. The cost of a brand-new FS S2800-24T4F and a Zyxel GS1900-24 is similar, too.

24-port fanless managed switch fs S2800-24T4F

Managed Fanless Gigabit Switch 8/12-Port

If we have only a few devices to connect, we can consider an 8- or 12-port fanless Gigabit switch. Some good 8-port and 12-port fanless Gigabit managed switches are popular with end users as well.

The HP 1800-8G and HP 1810-8G are two 8-port fanless Gigabit switches, each with 8 10/100/1000BASE-T ports. They are cost-effective if we do not require CLI management, STP (Spanning Tree Protocol) or other advanced management features. Both have a switching capacity of 16 Gb/s and a forwarding rate of 11.9 Mpps. The maximum power rating of the HP 1800-8G is 18 W and that of the HP 1810-8G is 15 W. Two 8-port fanless Gigabit switches from the Cisco 2960 and 2960G series, the Cisco WS-C2960G-8TC-L and Cisco WS-C2960-8TC-L, are also favorable options.
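Incidentally, the 16 Gb/s and 11.9 Mpps figures are exactly what wire-speed arithmetic predicts for eight full-duplex gigabit ports, assuming minimum-size 64-byte frames plus the 20-byte preamble and inter-frame gap:

```python
# Wire-speed arithmetic for an N-port gigabit switch: a minimum 64-byte frame
# plus 20 bytes of preamble and inter-frame gap occupies 672 bits on the wire.
def wire_speed(ports, gbps_per_port=1):
    mpps_per_gbps = 1000 / ((64 + 20) * 8)      # ~1.488 Mpps per Gbps
    capacity_gbps = ports * gbps_per_port * 2   # full duplex, both directions
    forwarding_mpps = ports * gbps_per_port * mpps_per_gbps
    return capacity_gbps, forwarding_mpps

cap, rate = wire_speed(8)   # e.g. the HP 1810-8G
print(cap, round(rate, 1))  # 16 11.9
```

When a data sheet quotes a lower forwarding rate than this formula gives, the switch is not fully non-blocking.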

The 12-port fanless Gigabit switch we recommend is the Juniper EX2200-C12T-2G. It is a fanless Gigabit switch with 12 10/100/1000BASE-T ports and 2 combo Gigabit SFP/RJ45 uplink ports. It comes in a standard 1RU package and its maximum power consumption is 30 W. In addition to all the Layer 2 features, it also provides static routing.

Conclusion

A fanless Gigabit switch of 24 ports or less is best for environments that require low noise and Gigabit speed. And a managed fanless Gigabit switch is a wise choice because it provides beneficial traffic control and network management abilities.

Source: http://www.fiber-optic-transceiver-module.com/recommendations-on-managed-fanless-gigabit-switch-24-port-or-less.html

 

What Benefits Can 10GBASE-T Copper Bring to Data Centers?

More than ten years ago 10GBASE-T was still a bud that had not yet shown its real beauty, but it has now been in data centers for almost ten years. A few years after 10GBASE-T copper was introduced to the market, it became widely available as LAN on Motherboard (LOM, a chipset embedded directly on the motherboard and capable of network connections) or as add-in adapters on servers. Why is it so widely adopted in data centers? What benefits does 10GBASE-T copper bring to data centers?

10GBASE-T copper switch and twisted pair

Plentiful Benefits of 10GBASE-T Copper Adoption

The benefits of 10GBASE-T copper can be measured in more than one aspect, including both the obvious hardware side and the hidden profits it can bring.

Inherent Benefits

Like previous copper BASE-T standards, 10GBASE-T is backward compatible with 10/100/1000BASE-T, so using it allows a seamless migration from lower data rates to 10GbE. When connected to existing BASE-T equipment, both ends auto-negotiate to the highest transmission mode they both support. Old 10/100/1000BASE-T devices thus remain supported even after the Fast Ethernet and Gigabit Ethernet copper switches in a data center are replaced by 10GBASE-T. This also accelerates the expansion of 10G to end users.

The second benefit of 10GBASE-T adoption is cost-efficiency: cheap twisted-pair copper cabling makes for the lowest-cost 10GbE deployment. In addition, it carries over the experience and expertise built on prior-generation BASE-T knowledge and training.

The third improvement is power consumption. Compared with 1000BASE-T, 10GBASE-T consumes much less power on a per-Gbps, per-port basis. It is possible to achieve 1 W per 10GbE port with 10GBASE-T technology, making it more efficient to design high-port-density 10GBASE-T switches and LOMs. In a data center that needs to connect thousands of servers, 10GBASE-T therefore brings a great decrease in power consumption compared with previous Gigabit copper Ethernet, and with it a large saving in energy and money.
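A rough, hypothetical comparison makes the per-Gbps point concrete. The 1 W per 10GbE port figure comes from the text; the 0.5 W per Gigabit port figure is an assumed typical value, not a measurement:

```python
# Illustrative per-Gbps power comparison; the Gigabit port power is assumed.
def watts_per_gbps(port_watts, port_gbps):
    return port_watts / port_gbps

gige_w = watts_per_gbps(0.5, 1)     # 1000BASE-T: 0.5 W/Gbps (assumption)
tengig_w = watts_per_gbps(1.0, 10)  # 10GBASE-T:  0.1 W/Gbps
print(gige_w / tengig_w)            # ~5x more bits per watt with 10GBASE-T
```

Even under these loose assumptions, moving each bit costs several times less energy at 10G.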

low power consumption

Also, compared with copper direct attach cable (DAC), 10GBASE-T cabling reaches farther to a server or switch, extending cable lengths from a few meters up to 100 meters, which is long enough to support nearly all data center topologies.

Future-proofing for 40GBASE-T

From copper Fast Ethernet to copper Gigabit Ethernet, and now to 10GBASE-T, we have enjoyed the continuity of BASE-T networks. But the effort on copper networking does not stop at this level: the next-generation BASE-T is under development. To use the advantages of low-cost twisted pair to fill the gaps DAC cannot cover, twisted pair is expected to support up to 30 m of structured cabling for 40GBASE-T. This would allow a single interconnect for both Top of Rack and End of Row network infrastructures.

PoE ports are already adopted in copper Fast Ethernet and Gigabit PoE switches, and there is potential for 10GBASE-T to support PoE as well. In the near future, IP devices relying on PoE may be able to upgrade to higher bandwidth, and users will enjoy the better performance brought by the improvement.

Conclusion

10GBASE-T copper benefits data centers by providing convenient and cost-effective 10G solutions, reducing power consumption, extending the cabling reach, and future-proofing for 40GBASE-T and 10GBASE-T PoE. The successive adoption of BASE-T networks will continue to benefit data centers as well as end users.

 

Differentiate the 3 Technologies: Switch Stacking vs Cascading vs Clustering

When we have more than one switch on hand, we often look for a better way to use and manage them. There are mainly three technologies we might use to interconnect or combine several switches: switch stacking, cascading and clustering. People encountering these terms for the first time often can't figure out the differences between them. Some discussions of switch stacking versus clustering and stacking versus cascading have been put forward, but a comprehensive comparison has not been made. So this post discusses switch stacking vs cascading vs clustering.

switch stacking vs cascading vs clustering

Switch Stacking vs Cascading vs Clustering

Comparing switch stacking, cascading and clustering requires knowing what these technologies mean, so first we will see what switch stacking, cascading and clustering are.

What Are Switch Stacking, Cascading and Clustering?

Switch stacking is a technology that combines two or more switches at the backplane, typically via a specialized physical cable (stack cable), so that they work like a single switch. The group of switches forms a "stack" and requires a stack master. There is also virtual stacking, where switches are stacked via Ethernet ports rather than a stack cable or module; in that scenario stacking looks much like cascading. The port density and switching capacity of a stack are the sums of those of the combined switches. For example, when you stack two 24-port switches, you get one large 48-port switch as far as configuration is concerned. All the switches in the stack share a single IP address for remote administration instead of each stack unit having its own. Only stackable switches can be stacked. Note that once the switches are stacked, they must not additionally be connected via ordinary copper or fiber ports besides the stacking ports, because the stack is logically one switch: doing so would be like connecting two ports of the same switch together, which can cause a loop.

By cascading switches you can have multiple ports interconnecting each switch in the group, but the switches are configured and managed independently. Switches that are cascaded together should all support Spanning Tree Protocol (STP) to allow redundancy and prevent loops. Generally, switches of any model and from any manufacturer can be cascaded, though there are occasional cases where two particular switches cannot.

A switch cluster is a set of switches connected together, whether through ordinary user ports or special ports. One switch plays the role of cluster command switch, and the others are cluster member switches managed by the command switch. A switch cluster needs only one IP address (on the command switch). Not all switches can be clustered: only specific cluster-capable switches from the same manufacturer can, and different manufacturers may use different clustering software.

Switch Stacking vs Cascading

When it comes to switch stacking vs cascading, the most obvious difference is that only stackable switches can be stacked, while almost all switches can be cascaded. Stackable switches are generally of the same model, or at least from the same manufacturer.

In a switch stack, the port capacity is the combination of all the member switches and the bandwidth is the sum of all switches. Cascading switches, by contrast, does not increase bandwidth; there is even a possibility of congestion at the cascade ports if there is only one connection between each switch.
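That congestion risk can be made concrete with a small sketch. The 23:1 figure assumes two hypothetical 24-port gigabit switches joined by a single 1G cascade link, with every remaining port sending toward the other switch in the worst case:

```python
# Worst-case oversubscription of a single cascade link between two
# hypothetical 24-port gigabit switches (one port on each used for the link).
def cascade_oversubscription(host_ports, port_gbps, uplink_gbps):
    # All hosts on one switch sending to the other must share the one link.
    return host_ports * port_gbps / uplink_gbps

print(cascade_oversubscription(23, 1, 1))  # 23.0 -> a 23:1 bottleneck
```

Real traffic rarely hits this worst case, but the single link remains the choke point a stack's backplane avoids.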

The stack is managed as a whole: when you configure one switch, the change is duplicated to every other switch in the stack, which is time-saving. In a switch cascade, however, you have to manage and configure every switch separately.

Stacking has a maximum number of member switches per group; for example, you can connect up to four FS S3800-24F4S or FS S3800-24T4S switches in a stack. Cascading has a limitation on the layers you can have, usually the traditional three-layer topology of core, aggregation and access. When that limit is exceeded, latency and packet loss may occur.

FS S3800-24F4S or FS S3800-24T4S stackable switch

Switch Stacking vs Clustering

Stacking and clustering are very similar in that a stack or a cluster uses only one IP address, and member switches are managed as a whole. So when you want to simplify the management of multiple switches, both stacking and clustering can be adopted.

Stacking may be a bit easier to configure, since the stack automatically recognizes a new stack member, while in a cluster you have to manually add a device to the switch cluster. Stack members are managed through a single configuration file, while cluster members keep separate, individual configuration files. So the stack master's management of every stack switch is complete, whereas the cluster command switch provides only partial management of the cluster members.

The distances between clustered switches can be more flexible: they can be in the same location or located across Layer 2 or Layer 3 boundaries. Stacked switches, in contrast, sit in the same layer and are generally in the same rack; only virtually stacked switches can be placed in different locations.

Conclusion

After this discussion of switch stacking vs cascading vs clustering, you may find that the three technologies share one similarity: switches in a stack, cascade or cluster group need to be physically connected, some through common Ethernet ports and some through special stack ports. Cascading places minimal requirements on the switch model, while both stacking and clustering require the switches to be stackable or cluster-capable and of the same model, or at least from a single manufacturer. Stacking and cascading are hardware implementations while clustering is software-based. Of the three, the management of a stack is the most complete.

Source: http://www.fiber-optic-equipment.com/differentiate-3-technologies-switch-stacking-vs-cascading-vs-clustering.html

Buy PoE Switch: 48-Port Switch vs 2 24-Port Switches

When we have about 30 mixed PoE and non-PoE connections in our network, the question of buying one 48-port PoE switch vs. 2 x 24-port PoE switches always puzzles us. If we already have one 24-port PoE switch in use and are just adding more ports, we can buy a single 16-24 port PoE switch or a 16-24 port Ethernet access switch to connect the additional devices. But for a newly built network, or 30 newly deployed PoE devices, we have to weigh the pros and cons of one 48-port switch vs. two 24-port switches.

48-Port Switch vs 2 24-Port Switch

PoE Connectivity: 48-Port Switch vs. 2 24-Port Switch Debate

In terms of cost, one 48-port PoE switch will usually cost more than two 24-port PoE switches of the same family, but not always. For example, the 48-port PoE+ managed switch FS S1600-48T4S costs less than two 24-port PoE+ managed FS S1600-24T4F switches. If we have a tight budget and care most about cost savings, the 48-port vs. 2 x 24-port debate can end here with the cheaper choice. Otherwise, there are more factors to consider.

Concerns of Installing 2 x 24 Port PoE Switch

If we choose to do the job with two 24-port PoE switches, we have to accept these shortcomings, unless they do not matter in our case. Firstly, two 24-port PoE switches take up more space than one 48-port PoE switch: a fixed-chassis 48-port PoE switch occupies a standard 1 RU of rack space, while 2 x 24-port switches use more than that whether each is 1RU or smaller. Secondly, if the two 24-port PoE switches are not stacked, we have to trunk them together, which eats up ports and leaves only 46 available. The trunk also creates a potential bottleneck at the uplink port, since traffic switched internally is vastly faster than a 1G or even a 10G uplink between switches; a 48-port switch has fewer bottleneck and congestion issues. The last concern is that two 24-port PoE switches are harder to manage than one 48-port PoE switch, even when the two are stacked.
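The port and bandwidth cost of the trunk can be sketched quickly; this is a worst-case illustration assuming a single 1G trunk link between two hypothetical 24-port switches:

```python
# Sketch of trunking two hypothetical 24-port gigabit PoE switches with a
# single 1G link, versus the ports one 48-port switch would offer.
def two_switch_trunk(ports_per_switch, trunk_links, link_gbps):
    usable = 2 * ports_per_switch - 2 * trunk_links  # trunk ports are lost
    inter_switch_gbps = trunk_links * link_gbps
    return usable, inter_switch_gbps

print(two_switch_trunk(24, 1, 1))  # (46, 1): 46 usable ports, 1 Gbps trunk
```

Adding trunk links widens the inter-switch pipe, but every extra link costs two more usable ports.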

Concerns of Installing 1 x 48 Port PoE Switch

When we decide to install one 48-port switch instead of 2 x 24-port switches, there are also practical concerns. The biggest issue is that we lose redundancy: if our only switch fails, we are stuck until we get a replacement, which could be over 24 hours away, whereas in a two-switch scenario at least half of our devices can stay up if one switch fails. We also lose the option of placing the two 24-port switches separately. If we have a single rack to install them in, there is no issue, but if we want desktop switches for IP cameras and access points in different offices, a single 48-port PoE switch will not do.

Suggestions for Selection

After the discussion of a 48-port switch vs. two 24-port switches, here are our conclusions. For better performance, a 48-port PoE switch wins over 2 x 24-port switches: there is less chance of congestion at the uplink ports between two switches, and all ports on the 48-port PoE switch can communicate with each other at wire speed. For easier management of the devices, a single 48-port PoE switch is also the suggested choice. When we need redundancy, we had better go with 2 x 24-port PoE switches. To avoid the problems of trunking and separate management, we can choose stackable PoE switches or a modular switch with two 24-port modules, which provides a large backplane and can be managed as a whole.

Ending

The concerns discussed in this post are general ones we may have when choosing one 48-port switch vs. two 24-port switches for PoE devices. The final decision should depend on our key purpose in buying them; the factors above are things we can take into account when facing a similar issue.

Related article: How to Choose a Suitable 48-Port PoE Switch?

Original post: http://www.fiber-optic-equipment.com/buy-poe-switch-48-port-switch-vs-2-24-port-switches.html