On hamiltonicity of uniform random intersection graphs

We give a sufficient condition for the hamiltonicity of the uniform random intersection graph G_{n,m,d}. It is a graph on n vertices, where each vertex is assigned d keys drawn independently at random from a given set of m keys, and where any two vertices establish an edge whenever they share at least one common key. We show that with probability tending to 1 the graph G_{n,m,d} has a Hamilton cycle provided that n = (m/2)(ln m + ln ln m + ω(m)) with some ω(m) → +∞ as m → ∞.
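To fix ideas, the model can be sampled directly. The following Python sketch (our own illustration; the function and variable names are ours, not from the paper) assigns each of the n vertices d keys drawn without replacement from a pool of m keys and joins two vertices whenever their key sets intersect.

```python
import random
from itertools import combinations

def sample_gnmd(n, m, d, rng=random.Random(0)):
    """Sample a uniform random intersection graph G_{n,m,d}.

    Each of the n vertices independently receives a set of d keys drawn
    uniformly at random (without replacement) from {0, ..., m-1}; two
    vertices are adjacent iff their key sets share at least one key.
    Returns (key_sets, edges).
    """
    key_sets = [frozenset(rng.sample(range(m), d)) for _ in range(n)]
    edges = {(u, v) for u, v in combinations(range(n), 2)
             if key_sets[u] & key_sets[v]}
    return key_sets, edges

key_sets, edges = sample_gnmd(n=30, m=100, d=3)
```

Note that the edge indicators are not independent: vertices sharing a popular key form cliques, which is the source of the dependence discussed below.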

edges. Indeed, for any three vertices, say x, y, z, the conditional probability P(x ∼ y | x ∼ z, y ∼ z) remains bounded away from zero, while the edge probability p′_d = p′_d(n) = P(x ∼ y) = d² m⁻¹ + O(d⁴ m⁻²) tends to 0 as m → ∞. Here x ∼ y denotes the adjacency relation. In what follows we say that an event holds with high probability (whp for short) if its probability tends to 1.
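The expansion P(x ∼ y) = d²m⁻¹ + O(d⁴m⁻²) can be checked against the exact value: two independent uniform d-subsets of an m-set are disjoint with probability C(m−d, d)/C(m, d). A small Python check (our own illustration, not part of the paper):

```python
from math import comb

def edge_prob(m, d):
    """Exact P(x ~ y): two independent uniform d-subsets of an m-set
    intersect with probability 1 - C(m-d, d) / C(m, d)."""
    return 1 - comb(m - d, d) / comb(m, d)

# The leading term d^2/m dominates; the error is O(d^4/m^2).
for m in (10**3, 10**4, 10**5):
    d = 3
    print(m, edge_prob(m, d), d * d / m)
```

For m = 10⁵ and d = 3 the exact probability and the approximation 9·10⁻⁵ agree to within the stated O(d⁴m⁻²) error.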
Lemma 1 below gives a sufficient condition for the hamiltonicity of G_{n,m,d} in terms of n and m (n should be sufficiently large relative to m). A necessary condition is formulated in Lemma 2. The two conditions are of the same order, but the constants do not match. In particular, we show that for n > (0.5 + ε) m ln m the graph G_{n,m,d} is Hamiltonian whp, while for n < (d⁻² − ε) m ln m the graph G_{n,m,d} has no Hamilton cycle whp as m → ∞. Here ε > 0 is arbitrarily small.
Lemma 1. Suppose that d ≥ 2 and that
n = (m/2)(ln m + ln ln m + ω(m)) (3)
for some ω(m) → +∞ as m → ∞. Then whp G_{n,m,d} is Hamiltonian as n → ∞.
Lemma 2. Let ε > 0 be fixed and suppose that
n < (d⁻² − ε) m ln m. (4)
Then the following statements hold true as m → ∞: (i) whp G_{n,m,d} has a vertex of degree at most 1, (ii) whp G_{n,m,d} has no Hamilton cycle.
Let us compare the hamiltonicity threshold of the classical Erdős–Rényi graph G(n, p), where edges are inserted independently with probability p, with our results for G_{n,m,d}. Recall that G(n, p) has a Hamilton cycle with probability tending to 1 whenever p = p(n) = n⁻¹(ln n + ln ln n + ω(n)) for some ω(n) → +∞ as n → ∞; see, e.g., [6]. From Lemmas 1 and 2 we conclude that G_{n,m,d} is Hamiltonian whp for p′_d = (2⁻¹ d² + ε) n⁻¹ ln n, and G_{n,m,d} has no Hamilton cycle whp for p′_d = (1 − ε) n⁻¹ ln n. Here ε > 0 is arbitrarily small. Our sufficient condition (3) improves upon the corresponding condition of Theorem 2 in [14], n ≥ (1 + ε)(m/d) ln(m/d).
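The constants in this comparison follow by substituting the thresholds of Lemmas 1 and 2 into the approximation p′_d ≈ d²/m; the arithmetic, sketched here for completeness (our reconstruction, not displayed in the original):

```latex
% At n = (0.5+\varepsilon)\, m \ln m we have 1/m = (0.5+\varepsilon)\ln m / n,
% and \ln n = \ln m + \ln\ln m + O(1) \sim \ln m, hence
p'_d \approx \frac{d^2}{m} = \frac{d^2 (0.5+\varepsilon)\ln m}{n}
     \sim \bigl(2^{-1} d^2 + \varepsilon'\bigr)\, n^{-1}\ln n .
% Similarly, at n = (d^{-2}-\varepsilon)\, m \ln m,
p'_d \approx \frac{d^2 (d^{-2}-\varepsilon)\ln m}{n}
     \sim \bigl(1 - \varepsilon''\bigr)\, n^{-1}\ln n .
```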

Proofs
Here we prove Lemmas 1 and 2. An auxiliary result used in the proof of Lemma 1 is stated separately in Lemma 3 at the end of the section.
Proof of Lemma 1. Given S_i, let s_i ⊂ S_i be a random subset of size 2. Let G* be a random multigraph on the vertex set W = {w_1, …, w_m} with the edge set {s_1, …, s_n}. Let n′ be the number of distinct edges s_i, and let G′ ⊂ G* be the subgraph consisting of the distinct edges. Observe that, given n′, the graph G′ has the same distribution as the uniform Erdős–Rényi graph on m vertices having n′ random edges (where the set of edges is uniformly distributed in the class of all subsets of size n′ of the set of all possible pairs of vertices). It is known (see, e.g., [6]) that such an Erdős–Rényi graph contains a Hamilton cycle whp provided that
n′ ≥ (m/2)(ln m + ln ln m + ω(m)) (6)
for some ω(m) → ∞ as m → ∞. It follows from (3), by Lemma 3, that (6) holds with high probability. Therefore, whp the graph G′ is Hamiltonian. The lemma now follows from the simple observation that the hamiltonicity of G′ implies the hamiltonicity of G = G_{n,m,d}. Indeed, let s_{i_1}, …, s_{i_m} be (the edges of) a Hamilton cycle of G′. The corresponding vertices v_{i_1}, …, v_{i_m} form a cycle, say C, in G. Let V′ denote the set of vertices outside C. Split V′ = V_1 ∪ V_2 ∪ ⋯ ∪ V_m into disjoint classes of vertices such that, for every j, all vertices of V_j hold the key s_{i_j} ∩ s_{i_{j+1}} (we define s_{i_{m+1}} := s_{i_1}). In particular, the vertices of V_j belong to a clique of G attached to the cycle C and containing the vertices v_{i_j} and v_{i_{j+1}}. Since all vertices of G are covered by the cycle C and several cliques attached to its edges, we can extend C to a Hamilton cycle. ⊓⊔

Proof of Lemma 2. Note that (i) implies (ii). We need to prove (i). In the proof we apply the following elementary inequalities (cf. [4]). Let S_1 and S_2 be two independent random sets uniformly distributed in the class of subsets of {1, …, m} of sizes x and y, respectively, with x + y ≤ m. Then
(1 − x/(m − y))^y ≤ P(S_1 ∩ S_2 = ∅) ≤ (1 − x/m)^y.
Given v ∈ V, let I_v denote the indicator of the event {d(v) ≤ 1}. Hence, I_v = 1 whenever d(v) ≤ 1.
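The cycle-extension step at the end of the proof of Lemma 1 can be made concrete. The following Python sketch (our own illustration; all names are ours) takes a Hamilton cycle of G′, given as a cyclic list of vertex indices whose pairs s_i traverse the keys, splices each leftover vertex into the position where one of its keys sits on the cycle, and returns a Hamilton cycle of G.

```python
def extend_cycle(cycle_edges, pairs):
    """Lift a Hamilton cycle of G' (on the keys) to a Hamilton cycle of G.

    cycle_edges : list of vertex indices i_1, ..., i_m whose pairs
        s_{i_1}, ..., s_{i_m} form a Hamilton cycle on the m keys,
        i.e. consecutive pairs (cyclically) share exactly one key.
    pairs : dict vertex -> frozenset of 2 keys (the sets s_i for every
        vertex of G, cycle vertices and leftover vertices alike).
    """
    m = len(cycle_edges)
    meet = {}  # key on the cycle -> position j of the two edges meeting there
    for j in range(m):
        a = pairs[cycle_edges[j]]
        b = pairs[cycle_edges[(j + 1) % m]]
        (w,) = a & b          # consecutive cycle edges share exactly one key
        meet[w] = j
    groups = {j: [] for j in range(m)}
    on_cycle = set(cycle_edges)
    for v, s in pairs.items():
        if v in on_cycle:
            continue
        w = next(iter(s))     # any key of v; every key lies on the cycle
        groups[meet[w]].append(v)
    ham = []
    for j in range(m):
        ham.append(cycle_edges[j])
        ham.extend(groups[j])  # these vertices form a clique on the shared key
    return ham
```

Each spliced group is a clique because all its members, together with the two adjacent cycle vertices, hold the same key, so consecutive vertices of the output always share a key.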
Here d(v) denotes the degree of v (the number of neighbours of v in G_{n,m,d}). Let X = Σ_{v∈V} I_v count the vertices of degree at most 1. Note that (i) is equivalent to the limit P(X > 0) → 1 as n → ∞. In the proof of this limit we show that
lim_n EX = +∞ and √(Var X) = o(EX) (8)
and then derive the limit P(X > 0) → 1 from (8), by Chebyshev's inequality.
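The deduction of P(X > 0) → 1 from (8) is the standard second-moment argument; written out (our own filling-in of the routine step):

```latex
\mathbf{P}(X = 0) \;\le\; \mathbf{P}\bigl(|X - \mathbf{E}X| \ge \mathbf{E}X\bigr)
\;\le\; \frac{\operatorname{Var} X}{(\mathbf{E}X)^2} \;=\; o(1),
```

by Chebyshev's inequality, since √(Var X) = o(EX).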
Now a simple calculation shows that EX = na → +∞ for n, m satisfying (4). ⊓⊔

Proof of Lemma 3. Assume that the sets s_1, s_2, … are drawn independently at random one after another, and let N_i count the number of distinct sets among s_1, …, s_i. As long as N_{i−1} < A′ we write I_i = 1 in the case where s_i differs from all previously drawn sets s_1, …, s_{i−1}, and write I_i = 0 otherwise. After the number of distinct sets reaches A′ (i.e., for i satisfying N_i ≥ A′) we put I_i = 1 in either case. Denote X = I_1 + ⋯ + I_n. We have n′ ≥ A′ whenever X ≥ A′. Note that, for every i, we have P(I_i = 1 | s_1, …, s_{i−1}) ≥ p′. A coupling with a binomial random variable Y ∼ Bin(n, p′) gives P(X ≥ A′) ≥ P(Y ≥ A′). Next we apply Chernoff's inequality to the latter probability,
P(Y < A′) ≤ exp(−(np′ − A′)²/(2np′)).
A simple calculation shows that (np′ − A′)²/(np′) → +∞ as m → ∞. Therefore, we obtain
P(n′ ≥ A′) = P(X ≥ A′) ≥ P(Y ≥ A′) = 1 − o(1),
thus completing the proof. ⊓⊔
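The counting behind Lemma 3 can be sanity-checked numerically: drawing n pairs uniformly from the C = C(m, 2) possible pairs, the expected number of distinct pairs is C(1 − (1 − 1/C)^n), which stays very close to n when n = O(m ln m). A small check (our own illustration, not part of the proof):

```python
import random
from math import comb, log

def expected_distinct(n, C):
    """Expected number of distinct values among n uniform draws from C cells."""
    return C * (1 - (1 - 1 / C) ** n)

def sampled_distinct(n, m, rng=random.Random(1)):
    """Empirical number of distinct pairs among n uniform random
    2-subsets of an m-element key set."""
    return len({frozenset(rng.sample(range(m), 2)) for _ in range(n)})

m = 200
n = int(0.5 * m * (log(m) + log(log(m))))  # the regime of condition (3)
C = comb(m, 2)
print(n, expected_distinct(n, C), sampled_distinct(n, m))
```

In this regime n/C is of order (ln m)/m, so the number of repeated pairs is a vanishing fraction of n, in line with n′ ≥ A′ holding whp.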