In a word, substantial. I have seen numerous businesses that attempt to create "Tier 2" virtual environments using their last-generation disk hardware, only to blame the virtual platform for the resulting poor performance.
One of those was a consulting client I visited recently that was comparing Hyper-V and vSphere. The group had concluded that the experiment was a failure for Hyper-V because of a substantial performance disparity between the two platforms. Simply running the dir command against their file structure took minutes rather than seconds, rendering the environment completely non-functional from an operational perspective.
Thinking something was amiss, I asked the group how similar the two underlying architectures actually were. It turned out that their vSphere solution was connected more or less directly to its high-speed SAS-based SAN, right through their networking core. The Hyper-V infrastructure, on the other hand, was connected to their end-of-life SATA-based SAN, spanning multiple hops, in another building, across cabling that was shared with the rest of the infrastructure.
Their response to my questioning: "Well, that is gigabit fiber between the two buildings!"
Indeed it was, but that fiber was also carrying all the production networking for the two buildings, in addition to the storage traffic for Hyper-V. Further complicating matters was the fact that the servers and storage were spread between two buildings, forcing any file operations to hop twice across this shared infrastructure.
Making matters even more challenging is the fact that storage bottlenecks aren't the easiest problems to spot in common performance counters. You can't always see poor throughput in Hyper-V's counters, nor in the counters presented by the remote SAN. To this group, the problem simply appeared to be Hyper-V.
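When the platform and SAN counters don't tell the story, a crude client-side probe can at least quantify the disparity. The sketch below (a hypothetical illustration, not any vendor tool) times directory enumeration, the same work the dir command was doing, so you can compare a well-placed LUN against one several hops away:

```python
import os
import statistics
import tempfile
import time

def measure_listing_latency(path: str, rounds: int = 5) -> float:
    """Return the median wall-clock time (seconds) to enumerate a directory.

    On well-placed storage this completes in milliseconds; storage sitting
    several shared network hops away shows up immediately as elevated latency.
    """
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        # Force full enumeration -- the same metadata work a dir command does.
        _ = [entry.name for entry in os.scandir(path)]
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

if __name__ == "__main__":
    # Demo against a throwaway local directory; in practice, point this at
    # the mount points of the LUNs you want to compare.
    with tempfile.TemporaryDirectory() as demo:
        for i in range(100):
            open(os.path.join(demo, f"file{i}.txt"), "w").close()
        print(f"{demo}: {measure_listing_latency(demo) * 1000:.2f} ms")
```

Running the same probe from hosts on both sides of a shared link makes the cost of each extra hop concrete, instead of leaving you to argue about whose platform is slower.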
The moral of this story is that while iSCSI lets you present a LUN anywhere there's a network connection, doing so just anywhere won't get you the best performance. Further, stretching those LUNs across long distances and separating virtual servers from SAN storage can create situations where file operations require multiple jumps across the network to reach their final resting place. When that network is shared between production networking and storage networking, you're only adding to the performance shortfall.
Carefully consider the impact of your storage's location. The tools for troubleshooting a bad decision aren't great, or at least require advanced sleuthing skills. And the result of a bad decision will be unnecessarily poor performance across your entire IT operation.