Running with Code: Like scissors, only more dangerous

For a couple of reasons, I love to look at programming interview questions around the internet.  Most of them revolve around data structures and algorithms, which is one of my favorite topics in computer science, and as a hiring manager I find it valuable to see what the industry is asking candidates before bringing them on board.  I came across this site tonight, and while it had a lot of questions I’ve seen before, this one, a variant of the programming exercise I was given for my first real (non-interning) job application, stood out:

Implement `Shuffle` given an array containing a deck of cards and the number of cards. Now make it O(n).

This caught my eye for a couple of reasons.  First, why would the algorithm ever not be O(n)?  Second, if I think its obvious implementation is O(n), do I have a misconception about what O(n) means?  The problem doesn’t give any other details about what “Shuffle” means, either.

Still, this is my naive algorithm:

```
void Shuffle(cards[], length)
    for i = 0 to length - 1
        n = Random % length
        Swap(&cards[i], &cards[n])
```

Swap() is constant time, not linear.  The only loop here is the single for loop, from 0 to the length of the array.  We could improve the quality of the randomness in the resultant shuffle by using a cryptographically strong random number generator such as RNGCryptoServiceProvider, at the cost of more memory.  Here’s an actual C# implementation:

Card.cs:

```csharp
using System;

namespace LinearBigOShuffle
{
    public class Card
    {
        public Rank Rank { get; set; }
        public Suit Suit { get; set; }

        public override string ToString()
        {
            return string.Format("{0,-8}{1}", Suit, Rank);
        }
    }

    public enum Suit
    {
        Club = 1, Diamond = 2, Heart = 3, Spade = 4,
    }

    public enum Rank
    {
        Ace = 1, Two = 2, Three = 3, Four = 4, Five = 5, Six = 6, Seven = 7, Eight = 8, Nine = 9, Ten = 10,
        Jack = 11, Queen = 12, King = 13,
    }
}
```
Program.cs:
```csharp
using System;

namespace LinearBigOShuffle
{
    class Program
    {
        static void Main()
        {
            // Initialize the cards array.
            Card[] cards = InitializeStandardDeck();

            // Verify that the cards were initialized correctly.
            PrintCards(cards);

            Console.WriteLine("Shuffling...");
            Shuffle(cards);
            PrintCards(cards);

            // Wait for the user to press Enter to exit.
            Console.ReadLine();
        }

        private static void Shuffle(Card[] cards)
        {
            Random r = new Random();
            int n = cards.Length;
            for (int i = 0; i < n; i++)
            {
                int nextSwapIndex = r.Next(n);
                Card temp = cards[nextSwapIndex];
                cards[nextSwapIndex] = cards[i];
                cards[i] = temp;
            }
        }

        private static Card[] InitializeStandardDeck()
        {
            Card[] cards = new Card[52];
            int index = 0;
            for (int suit = (int)Suit.Club; suit <= (int)Suit.Spade; suit++)
            {
                for (int rank = (int)Rank.Ace; rank <= (int)Rank.King; rank++)
                {
                    cards[index++] = new Card { Rank = (Rank)rank, Suit = (Suit)suit };
                }
            }
            return cards;
        }

        private static void PrintCards(Card[] cards)
        {
            foreach (Card c in cards)
            {
                Console.WriteLine(c);
            }
        }
    }
}
```

In this implementation, I’ve included a couple of helper methods and assumed Ace-low, but I don’t think that really matters.  If you run the program, you’ll see the in-order deck followed by the shuffled deck.  There are a couple of caveats with the Shuffle implementation I’ve provided:

• There is no guarantee that a card won’t end up in the same relative position in which it started, or the same absolute position.
• There is no guarantee that an individual card won’t move multiple times.

If you think about it, though, both of these things can happen when you really shuffle cards, too!
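As an aside, a small change to that loop gives the Fisher–Yates (Knuth) shuffle: swap each position only with a randomly chosen position that hasn’t been fixed yet.  It’s still a single O(n) pass, but unlike swapping with an arbitrary index, it produces every ordering with equal probability.  Here’s a minimal sketch over an int deck (a Card array would work the same way); the class and method names are just for illustration:

```csharp
using System;

class FisherYatesDemo
{
    // Walk backward from the end of the array; at each step, swap position i
    // with a random position j in [0, i]. Each element is fixed exactly once,
    // so this is a single O(n) pass, just like the original Shuffle.
    public static void Shuffle(int[] items, Random r)
    {
        for (int i = items.Length - 1; i > 0; i--)
        {
            int j = r.Next(i + 1); // 0..i inclusive
            int temp = items[j];
            items[j] = items[i];
            items[i] = temp;
        }
    }

    static void Main()
    {
        int[] deck = new int[52];
        for (int i = 0; i < deck.Length; i++) deck[i] = i;

        Shuffle(deck, new Random());
        Console.WriteLine(string.Join(", ", deck));
    }
}
```

Note that a card can still land back in its starting position here, just as with a real shuffle; the difference is only in how evenly the orderings are distributed.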

The question I raised earlier was: just how big is Big-O, anyway?  Well, Big-O is great for analyzing algorithmic complexity, but that’s not always what we want when measuring the performance of an implementation.  I’m depending on a library function in the Random class!  What if it were incredibly slow, because it produced truly random numbers instead of pseudorandom numbers?  In that scenario, I would not have a terribly performant implementation, even though my algorithm was “theoretically” correct.
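To make that trade-off concrete, here is one way to get a bounded random index out of RNGCryptoServiceProvider, which only hands you raw bytes.  This is a sketch, and the class name and rejection-sampling approach are my own choices, not something from the post; each call does more work (and allocates a buffer) compared to Random.Next, yet a shuffle built on it is still O(n):

```csharp
using System;
using System.Security.Cryptography;

class CryptoRandomSketch
{
    static readonly RNGCryptoServiceProvider rng = new RNGCryptoServiceProvider();
    static readonly byte[] buffer = new byte[4];

    // Returns a value in [0, maxExclusive) drawn from the CSPRNG.
    // Rejection sampling avoids the slight bias a bare modulo would introduce:
    // we only accept values below the largest multiple of maxExclusive.
    public static int NextIndex(int maxExclusive)
    {
        uint limit = uint.MaxValue / (uint)maxExclusive * (uint)maxExclusive;
        uint value;
        do
        {
            rng.GetBytes(buffer);
            value = BitConverter.ToUInt32(buffer, 0);
        } while (value >= limit);
        return (int)(value % (uint)maxExclusive);
    }

    static void Main()
    {
        // Drop-in replacement for r.Next(n) inside the Shuffle loop above.
        for (int i = 0; i < 5; i++)
            Console.WriteLine(NextIndex(52));
    }
}
```

The loop count in Shuffle doesn’t change; only the constant cost per iteration does, which is exactly the distinction between algorithmic complexity and implementation performance.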

Big-O is great!  But I guess what I was trying to say here is…

Don’t think it’s the only thing that will impact your code!

Comments on this post: Just how Big is Big-O, anyway?

# re: Just how Big is Big-O, anyway?
The real question about Random is not its speed, but its own big-O. Even if it is really slow, your algorithm would soon (i.e. with large enough n) overtake another algorithm that uses a much faster Random method, but is O(n²) in itself.
Left by Igor Brejc on Aug 07, 2009 7:33 AM

# re: Just how Big is Big-O, anyway?
> There is no guarantee that a card won’t end up in the same relative position in which it started, or the same absolute position.
> There is no guarantee that an individual card won’t move multiple times.

These two would be solved by moving the randomly selected card into either the first or last position. And wouldn't that be better than physically shuffling cards?

# In Ruby:

```ruby
def shuffle( cards )
  for i in 0..cards.length
    cards.push( cards.delete_at( rand( cards.length - i) + 1))
  end
end
```

Left by ct on Dec 04, 2009 12:50 AM
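The idea in this comment, repeatedly pulling a random card out of the not-yet-chosen portion and appending it, can also be sketched in C# (the names here are mine, not from the post).  Note the hidden cost: removing from the middle of an array-backed list shifts every later element, so each step is O(n) and the whole shuffle becomes O(n²), which ties right back to the Big-O point above:

```csharp
using System;
using System.Collections.Generic;

class RemoveAndAppendShuffle
{
    // Pick a random card from the unchosen prefix and move it to the end.
    // After step i, the last i cards are the shuffled portion of the deck.
    public static void Shuffle(List<int> cards)
    {
        Random r = new Random();
        int n = cards.Count;
        for (int i = 0; i < n; i++)
        {
            int j = r.Next(n - i);   // index within the unchosen prefix
            int picked = cards[j];
            cards.RemoveAt(j);       // shifts later elements: O(n) per step
            cards.Add(picked);
        }
    }

    static void Main()
    {
        List<int> deck = new List<int>();
        for (int i = 0; i < 52; i++) deck.Add(i);

        Shuffle(deck);
        Console.WriteLine(string.Join(", ", deck));
    }
}
```

So the in-place swap loses nothing to this approach: it avoids the quadratic shifting and still visits each position once.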