Fossil with Commonmark

Check-in [5d3c6df6ba]

Overview
Comment: merge trunk
SHA1: 5d3c6df6ba196df8c4158d5b3d0b9f949a698b61
User & Date: jan.nijtmans 2015-03-09 14:12:49
Context
2015-03-16 20:32  merge trunk  (check-in: cd234b1c46, user: jan.nijtmans, tags: svn-import)
2015-03-09 14:12  merge trunk  (check-in: 5d3c6df6ba, user: jan.nijtmans, tags: svn-import)
2015-03-09 11:15  Add extra space between lines of the file-list in a timeline.  (check-in: c68c68d9d1, user: drh, tags: trunk)
2015-02-25 22:39  Merge trunk. Remove --no-svn-rev switch for "fossil import --svn", just use --incremental for that (svn-rev-?? tags are off by default, but switched on by --incremental)  (check-in: 89a56fe0c7, user: jan.nijtmans, tags: svn-import)
Changes

Changes to skins/eagle/css.txt.

@@ -265,7 +265,27 @@
 span.diffln {
   color: white;
 }
 
 #canvas {
   background-color: #485D7B;
 }
+
+.fileage tr:hover {
+  background-color: #7EA2D9;
+}
+
+.fileage td {
+  font-family: "courier new";
+}
+
+div.filetreeline:hover {
+  background-color: #7EA2D9;
+}
+
+div.selectedText {
+  background-color: #7EA2D9;
+}
+
+.statistics-report-graph-line {
+  background-color: #7EA2D9;
+}

Added skins/xekri/README.md.

"xekri" is a Lojban word that means "extermely dark-colored".
This skin was contributed by Andrew Moore.

Added skins/xekri/css.txt.

/**************************************
 * Xekri
 */


/**************************************
 * General HTML
 */

html {
  background-color: #333;
  color: #eee;
  font-family: Monospace;
  font-size: 1em;
  min-height: 100%;
}

body {
  margin: 0;
  padding: 0;
}

a {
  color: #44e;
}

a:hover {
  font-weight: bold;
}

blockquote pre {
  border: 1px dashed #ee0;
}

blockquote pre, pre.verbatim {
  background-color: #000;
  border-radius: 0.75rem;
  padding: 0.5rem;
  white-space: pre-wrap;
}

input[type="password"], input[type="text"], textarea {
  background-color: #111;
  color: #fff;
  font-size: 1rem;
}

h1 {
  font-size: 2rem;
}

h2 {
  font-size: 1.5rem;
}

h3 {
  font-size: 1.25rem;
}


/**************************************
 * Main Area
 */

div.header, div.mainmenu, div.submenu, div.content, div.footer {
  clear: both;
  margin: 0 auto;
  max-width: 80%;
  padding: 0.25rem 1rem;
}


/**************************************
 * Main Area: Header
 */

div.header {
  margin: 0.5rem auto 0 auto;
}

div.logo img {
  float: left;
  padding: 0;
  box-shadow: 3px 3px 1px #000;
  margin: 0 6px 6px 0;
}

div.logo br {
  display: none;
}

div.logo nobr {
  color: #eee;
  font-size: 1.2rem;
  font-weight: bold;
  padding: 0;
  text-shadow: 3px 3px 1px #000;
  vertical-align: top;
  white-space: nowrap;
}

div.title {
  color: #22e;
  font-family: Verdana, sans-serif;
  font-weight: bold;
  font-size: 2.5rem;
  padding: 0.5rem;
  text-align: center;
  text-shadow: 3px 3px 1px #000;
}

div.status {
  color: #ee0;
  font-size: 1rem;
  padding: 0.25rem;
  text-align: right;
  text-shadow: 2px 2px 1px #000;
}


/**************************************
 * Main Area: Global Menu
 */

div.mainmenu, div.submenu {
  background-color: #080;
  border-radius: 1rem 1rem 0 0;
  box-shadow: 3px 4px 1px #000;
  color: #000;
  font-weight: bold;
  font-size: 1.1rem;
  text-align: center;
}

div.mainmenu {
  padding-bottom: 0.25rem;
}

div.submenu {
  border-top: 1px solid #0a0;
  border-radius: 0;
  display: flex;
  justify-content: space-around;
}

div.mainmenu a, div.submenu a {
  color: #000;
  padding: 0 0.75rem;
  text-decoration: none;
  vertical-align: middle;
}

div.mainmenu a:hover, div.submenu a:hover {
  color: #fff;
  text-shadow: 0px 0px 6px #0f0;
}

div.submenu select {
  background-color: #090;
}

/**************************************
 * Main Area: Content
 */

div.content {
  background-color: #222;
  border-radius: 0 0 1rem 1rem;
  box-shadow: 3px 3px 1px #000;
  min-height:40%;
  padding-bottom: 1rem;
  padding-top: 0.5rem;
}

div.content table[bgcolor="white"] {
  color: #000;
}

/**************************************
 * Main Area: Footer
 */

div.footer {
  color: #ee0;
  font-size: 0.75rem;
  padding: 0;
  text-align: right;
  width: 75%;
}


div.footer div {
  background-color: #222;
  box-shadow: 3px 3px 1px #000;
  border-radius: 0 0 1rem 1rem;
  margin: 0 0 10px 0;
  padding: 0.5rem 0.75rem;
}

div.footer div.page-time {
  float: left;
}

div.footer div.fossil-info {
  float: right;
}

div.footer a, div.footer a:link, div.footer a:visited {
  color: #ee0;
}

div.footer a:hover {
  color: #fff;
  text-shadow: 0px 0px 6px #ee0;
}


/**************************************
 * Check-in
 */

table.label-value th {
  vertical-align: top;
  text-align: right;
  padding: 0.1rem 1rem;
}


/**************************************
 * Diffs
 */

/* Code Added */
span.diffadd {
  background-color: #7f7;
  color: #000;
}

/* Code Changed */
span.diffchng {
  background-color: #77f;
  color: #000;
}

/* Code Deleted */
span.diffrm {
  background-color: #f77;
  color: #000;
}


/**************************************
 * Diffs : Side-By-Side
 */

/* display (column-based) */
table.sbsdiffcols {
  border-spacing: 0;
  font-size: 0.75em;
  width: 90%;
}

table.sbsdiffcols pre {
  border: 0;
  margin: 0;
  padding: 0;
}

table.sbsdiffcols td {
  padding: 0;
  vertical-align: top;
}

/* line number column */
div.difflncol {
  color: #ee0;
  padding-right: 0.75em;
  text-align: right;
}

/* diff text column */
div.difftxtcol {
  background-color: #111;
  overflow-x: auto;
  width: 45em;
}

/* suppressed lines */
span.diffhr {
  display: inline-block;
  margin-bottom: 0.75em;
  color: #ff0;
}

/* diff marker column */
div.diffmkrcol {
  padding: 0 0.5em;
}


/**************************************
 * Diffs : Unified
 */

pre.udiff {
  background-color: #111;
}

/* line numbers */
span.diffln {
  background-color: #222;
  color: #ee0;
}


/**************************************
 * File List : Flat
 */

table.browser {
  width: 100%;
  border: 0;
}

td.browser {
  width: 24%;
  vertical-align: top;
}

ul.browser {
  margin: 0.5rem;
  padding: 0.5rem;
  white-space: nowrap;
}

ul.browser li.dir {
  font-style: italic
}


/**************************************
 * File List : Tree
 */

.filetree {
  line-height: 1.5;
  margin: 1rem 0;
}

/* list */
.filetree ul {
  list-style: none;
  margin: 0;
  padding: 0;
}

/* collapsed list */
.filetree ul.collapsed {
  display: none;
}

/* lists below the root */
.filetree ul ul {
  margin: 0 0 0 21px;
  position: relative;
}

/* lists items */
.filetree li {
  margin: 0;
  padding: 0;
  position: relative;
}

/* node lines */
.filetree li li:before {
  border-bottom: 2px solid #000;
  border-left: 2px solid #000;
  content: '';
  height: 1.5rem;
  left: -14px;
  position: absolute;
  top: -0.8rem;
  width: 14px;
}

/* directory lines */
.filetree li > ul:before {
  border-left: 2px solid #000;
  bottom: 0;
  content: '';
  left: -35px;
  position: absolute;
  top: -1.5rem;
}

/* hide lines for last-child directories */
.filetree li.last > ul:before {
  display: none;
}

.filetree a {
  background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP\/\/\/yEhIf\/\/\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfmgOUvoaqDSCxrEEfF14GqFXImJZsu73wepJzVMNxrtNTj3NATMKhpwAAOw==);
  background-position: center left;
  background-repeat: no-repeat;
  display: inline-block;
  min-height: 16px;
  padding-left: 21px;
  position: relative;
  z-index: 1;
}

.filetree .dir > a {
  background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP/WVCIiIv\/\/\/wAAACH5BAEHAAIALAAAAAAQABAAAAInlI9pwa3XYniCgQtkrAFfLXkiFo1jaXpo+jUs6b5Z/K4siDu5RPUFADs=);
  font-style: italic
}


/**************************************
 * Logout
 */

span.loginError {
  color: #f00;
}

table.login_out {
  margin: 10px;
  text-align: left;
}

td.login_out_label {
  text-align: center;
}

div.captcha {
  padding: 1rem;
  text-align: center;
}

table.captcha {
  background-color: #111;
  border-color: #111;
  border-style: inset;
  border-width: 2px;
  margin: auto;
  padding: 0.5rem;
}

table.captcha pre {
  color: #ee0;
}


/**************************************
 * Statistics Reports
 */

.statistics-report-graph-line {
  background-color: #22e;
}

.statistics-report-table-events th {
  padding: 0 1rem;
}

.statistics-report-table-events td {
  padding: 0.1rem 1rem;
}

.statistics-report-row-year {
  color: #ee0;
  text-align: left;
}

.statistics-report-week-number-label {
  font-size: 0.8rem;
  text-align: right;
}

.statistics-report-week-of-year-list {
  font-size: 0.8rem;
}


/**************************************
 * Sections
 */

div.section, div.sectionmenu {
  color: #eee;
  background-color: #22e;
  border-radius: 0 3rem;
  box-shadow: 2px 2px #000;
  display: flex;
  font-size: 1.1rem;
  font-weight: bold;
  justify-content: space-around;
  margin: 1.2rem auto 0.75rem auto;
  padding: 0.2rem;
  text-align: center;
}

div.sectionmenu {
  border-radius: 0 0 3rem 3rem;
  margin-top: -0.75rem;
  width: 75%;
}

div.sectionmenu > a:link, div.sectionmenu > a:visited {
  color: #000;
  text-decoration: none;
}

div.sectionmenu > a:hover {
  color: #eee;
  text-shadow: 0px 0px 6px #eee;
}


/**************************************
 * Sidebox
 */

div.sidebox {
  background-color: #333;
  border-radius: 0.5rem;
  box-shadow: 3px 3px 1px #000;
  float: right;
  margin: 1rem 0.5rem;
  padding: 0.5rem;
}

div.sidebox ol {
  margin: 0 0 0.5rem 2.5rem;
  padding: 0 0;
}

div.sidebox ol li {
  margin-top: 0.75rem;
}

div.sideboxTitle {
  background-color: #ee0;
  border-radius: 0.5rem 0.5rem 0 0;
  color: #000;
  font-weight: bold;
  margin: -0.5rem -0.5rem 0 -0.5rem;
  padding: 0.25rem;
  text-align: center;
}

div.sideboxDescribed {
  display: inline;
}

/* --- Untested : Begin --- */
/* The defined element in sideboxes for branches,.. */
span.disabled {
  color: #f00;
}
/* --- Untested : End --- */


/**************************************
 * Tag
 */

/* --- Untested : Begin --- */
/* the format for the tag links */
a.tagLink {
}
/* the format for the tag display(no history permission!) */
span.tagDsp {
  font-weight: bold;
}
/* the format for fixed/canceled tags,.. */
span.infoTagCancelled {
  font-weight: bold;
  text-decoration: line-through;
}
/* --- Untested : End --- */


/**************************************
 * Ticket
 */

table.report {
  color: #000;
  border: 1px solid #999;
  border-collapse: collapse;
  margin: 1rem 0;
}

table.report tr th {
  color: #eee;
  padding: 3px 5px;
  text-transform : capitalize;
}

table.report tr td {
  padding: 3px 5px;
}

/* example ticket colors */
table.rpteditex {
  border-collapse: collapse;
  border-spacing: 0;
  color: #000;
  float: right;
  margin: 0;
  padding: 0;
  text-align: center;
  width: 125px;
}

td.rpteditex {
  border-color: #000;
  border-style: solid;
  border-width: thin;
}

#reportTable {
}

/* format for labels on ticket display page */
td.tktDspLabel {
  text-align: right;
}

/* format for values on ticket display page */
td.tktDspValue {
  background-color: #111;
  text-align: left;
  vertical-align: top;
}

/* format for ticket error messages */
span.tktError {
  color: #f00;
  font-weight: bold;
}


/**************************************
 * Timeline
 */

#canvas {
  color: #000;
  background-color: #fff;
}

div.divider {
  color: #ee0;
  font-size: 1.2rem;
  font-weight: bold;
  margin-top: 1rem;
  white-space: nowrap;
}

/* The suppressed duplicates lines in timeline, .. */
span.timelineDisabled {
  font-size: 0.5rem;
  font-style: italic;
}

/* the format for the timeline data table */
table.timelineTable {
  border: 0;
}

/* The row in the timeline table that contains the entry of interest */
tr.timelineSelected {
  border: 1px solid #eee;
  border-radius: 1rem;
}

tr.timelineSelected td.timelineTime
, tr.timelineSelected td.timelineTableCell {
  background-color: #333;
  box-shadow: 2px 2px 1px #000;
  padding: 0.5rem;
}

tr.timelineSelected td.timelineTime {
  border-radius: 1rem 0 0 1rem;
}

tr.timelineSelected td.timelineTableCell {
  border-radius: 0 1rem 1rem 0;
}

/* the format for the timeline data cells */
td.timelineTableCell {
  padding: 0.3rem;
  text-align: left;
  vertical-align: top;
}

td.timelineTableCell[style] {
  color: #000;
}

/* the format for the timeline data cell of the current checkout */
tr.timelineCurrent td.timelineTableCell {
  border: 0;
  border-radius: 1em 0em;
}

/* the format for the timeline leaf marks */
span.timelineLeaf {
  font-weight: bold;
}

/* the format for the timeline version links */
a.timelineHistLink {
}

/* the format for the timeline version display(no history permission!) */
span.timelineHistDsp {
  font-weight: bold;
}

/* the format for the timeline time display */
td.timelineTime {
  text-align: right;
  vertical-align: top;
  white-space: nowrap;
}

/* the format for the grap placeholder cells in timelines */
td.timelineGraph {
  text-align: left;
  vertical-align: top;
  width: 20px;
}


/**************************************
 * User Edit
 */

/* layout definition for the capabilities box on the user edit detail page */
div.ueditCapBox {
  float: left;
  margin: 0 20px 20px 0;
}

/* format of the label cells in the detailed user edit page */
td.usetupEditLabel {
  text-align: right;
  vertical-align: top;
  white-space: nowrap;
}

/* color for capabilities, inherited by nobody */
span.ueditInheritNobody {
  color: #0f0;
}

/* color for capabilities, inherited by developer */
span.ueditInheritDeveloper {
  color: #f00;
}

/* color for capabilities, inherited by reader */
span.ueditInheritReader {
  color: black;
}

/* color for capabilities, inherited by anonymous */
span.ueditInheritAnonymous {
  color: #00f;
}

/* format for capabilities */
span.capability {
  font-weight: bold;
}

/* format for different user types */
span.usertype {
  font-weight: bold;
}

span.usertype:before {
  content:"'";
}

span.usertype:after {
  content:"'";
}


/**************************************
 * User List
 */

table.usetupLayoutTable {
  margin: 0.5rem;
  outline-style: none;
  padding: 0;
}

td.usetupColumnLayout {
  vertical-align: top
}

td.usetupColumnLayout ol th {
  padding: 0 0.75rem 0.5rem 0;
}

span.note {
  color: #ee0;
  font-weight: bold;
}

table.usetupUserList {
  margin: 0.5rem;
}

.usetupListUser {
  padding-right: 20px;
  text-align: right;
}

.usetupListCap {
  padding-right: 15px;
  text-align: center;
}

.usetupListCon {
  text-align: left;
}


/**************************************
 * Wiki
 */

span.wikiError {
  font-weight: bold;
  color: #f00;
}

/* the format for fixed/cancelled tags */
span.wikiTagCancelled {
  text-decoration: line-through;
}


/**************************************
 * Did not encounter these
 */

/* selected lines of text within a linenumbered artifact display */
div.selectedText {
  font-weight: bold;
  color: #00f;
  background-color: #d5d5ff;
  border: 1px #00f solid;
}

/* format for missing privileges note on user setup page */
p.missingPriv {
  color: #00f;
}

/* format for leading text in wikirules definitions */
span.wikiruleHead {
  font-weight: bold;
}


/* format for user color input on checkin edit page */
input.checkinUserColor {
  /* no special definitions, class defined, to enable color pickers, 
  * f.e.:
  * ** add the color picker found at http:jscolor.com as java script 
  * include
  * ** to the header and configure the java script file with
  * ** 1. use as bindClass :checkinUserColor
  * ** 2. change the default hash adding behaviour to ON
  * ** or change the class defition of element identified by 
  * id="clrcust"
  * ** to a standard jscolor definition with java script in the footer. 
  * */
}

/* format for end of content area, to be used to clear page flow. */
div.endContent {
  clear: both;
}

/* format for general errors */
p.generalError {
  color: #f00;
}

/* format for tktsetup errors */
p.tktsetupError {
  color: #f00;
  font-weight: bold;
}
/* format for xfersetup errors */
p.xfersetupError {
  color: #f00;
  font-weight: bold;
}
/* format for th script errors */
p.thmainError {
  color: #f00;
  font-weight: bold;
}
/* format for th script trace messages */
span.thTrace {
  color: #f00;
}
/* format for report configuration errors */
p.reportError {
  color: #f00;
  font-weight: bold;
}
/* format for report configuration errors */
blockquote.reportError {
  color: #f00;
  font-weight: bold;
}
/* format for artifact lines, no longer shunned */
p.noMoreShun {
  color: #00f;
}
/* format for artifact lines beeing shunned */
p.shunned {
  color: #00f;
}
/* a broken hyperlink */
span.brokenlink {
  color: #f00;
}
/* List of files in a timeline */
ul.filelist {
  margin-top: 3px;
  line-height: 100%;
}
/* Moderation Pending message on timeline */
span.modpending {
  color: #b30;
  font-style: italic;
}
/* format for textarea labels */
span.textareaLabel {
  font-weight: bold;
}
/* format for th1 script results */
pre.th1result {
  white-space: pre-wrap;
  word-wrap: break-word;
}
/* format for th1 script errors */
pre.th1error {
  white-space: pre-wrap;
  word-wrap: break-word;
  color: #f00;
}

/* even table row color */
tr.row0 {
  /* use default */
}
/* odd table row color */
tr.row1 {
  /* Use default */
}

Added skins/xekri/footer.txt.

</div>
<div class="footer">
<div class="page-time">
Generated in <th1>puts [expr {([utime]+[stime]+1000)/1000*0.001}]</th1>s
</div>
<div class="fossil-info">
Fossil v$release_version $manifest_version
</div>
</div>
</body>
</html>

Added skins/xekri/header.txt.

<html>
<head>
<base href="$baseurl/$current_page" />
<title>$<project_name>: $<title></title>
<link rel="alternate" type="application/rss+xml" title="RSS Feed"
      href="$home/timeline.rss" />
<link rel="stylesheet" href="$stylesheet_url" type="text/css"
      media="screen" />
</head>
<body>
<div class="header">
  <div class="logo">
    <th1>
    ##
    ## NOTE: The purpose of this procedure is to take the base URL of the
    ##       Fossil project and return the root of the entire web site using
    ##       the same URI scheme as the base URL (e.g. http or https).
    ##
    proc getLogoUrl { baseurl } {
      set idx(first) [string first // $baseurl]
      if {$idx(first) != -1} {
        ##
        ## NOTE: Skip second slash.
        ##
        set idx(first+1) [expr {$idx(first) + 2}]
        ##
        ## NOTE: (part 1) The [string first] command does NOT actually
        ##       support the optional startIndex argument as specified
        ##       in the TH1 support manual; therefore, we fake it by
        ##       using the [string range] command and then adding the
        ##       necessary offset to the resulting index manually
        ##       (below).  In Tcl, we could use the following instead:
        ##
        ##       set idx(next) [string first / $baseurl $idx(first+1)]
        ##
        set idx(nextRange) [string range $baseurl $idx(first+1) end]
        set idx(next) [string first / $idx(nextRange)]
        if {$idx(next) != -1} {
          ##
          ## NOTE: (part 2) Add the necessary offset to the result of
          ##       the search for the next slash (i.e. the one after
          ##       the initial search for the two slashes).
          ##
          set idx(next) [expr {$idx(next) + $idx(first+1)}]
          ##
          ## NOTE: Back up one character from the next slash.
          ##
          set idx(next-1) [expr {$idx(next) - 1}]
          ##
          ## NOTE: Extract the URI scheme and host from the base URL.
          ##
          set scheme [string range $baseurl 0 $idx(first)]
          set host [string range $baseurl $idx(first+1) $idx(next-1)]
          ##
          ## NOTE: Try to stay in SSL mode if we are there now.
          ##
          if {[string compare $scheme http:/] == 0} {
            set scheme http://
          } else {
            set scheme https://
          }
          set logourl $scheme$host/
        } else {
          set logourl $baseurl
        }
      } else {
        set logourl $baseurl
      }
      return $logourl
    }
    set logourl [getLogoUrl $baseurl]
    </th1>
    <a href="$logourl">
      <img src="$logo_image_url" border="0" alt="$project_name">
    </a>
  </div>
  <div class="title">$<title></div>
  <div class="status"><nobr><th1>
     if {[info exists login]} {
       puts "Logged in as $login"
     } else {
       puts "Not logged in"
     }
  </th1></nobr><small><div id="clock"></div></small></div>
</div>
<script>
function updateClock(){
  var e = document.getElementById("clock");
  if(e){
    var d = new Date();
    function f(n) {
      return n < 10 ? '0' + n : n;
    }
    e.innerHTML = d.getUTCFullYear()+ '-' +
      f(d.getUTCMonth() + 1) + '-' +
      f(d.getUTCDate())      + ' ' +
      f(d.getUTCHours())     + ':' +
      f(d.getUTCMinutes());
    setTimeout("updateClock();",(60-d.getUTCSeconds())*1000);
  }
}
updateClock();
</script>
<div class="mainmenu">
<th1>
html "<a href='$home$index_page'>Home</a>\n"
html "<a href='$home/help'>Help</a>\n"
if {[anycap jor]} {
  html "<a href='$home/timeline'>Timeline</a>\n"
}
if {[anoncap oh]} {
  html "<a href='$home/tree?ci=tip'>Files</a>\n"
}
if {[anoncap o]} {
  html "<a href='$home/brlist'>Branches</a>\n"
  html "<a href='$home/taglist'>Tags</a>\n"
}
if {[anoncap r]} {
  html "<a href='$home/ticket'>Tickets</a>\n"
}
if {[anoncap j]} {
  html "<a href='$home/wiki'>Wiki</a>\n"
}
if {[hascap s]} {
  html "<a href='$home/setup'>Admin</a>\n"
} elseif {[hascap a]} {
  html "<a href='$home/setup_ulist'>Users</a>\n"
}
if {[info exists login]} {
  html "<a href='$home/login'>Logout</a>\n"
} else {
  html "<a href='$home/login'>Login</a>\n"
}
</th1></div>

Changes to src/add.c.

@@ -331,18 +331,18 @@
   add_files_in_sfile(vid);
   db_end_transaction(0);
 }
 
 /*
 ** COMMAND: rm
-** COMMAND: delete*
+** COMMAND: delete
+** COMMAND: forget*
 **
-** Usage: %fossil rm FILE1 ?FILE2 ...?
-**    or: %fossil delete FILE1 ?FILE2 ...?
+** Usage: %fossil rm|delete|forget FILE1 ?FILE2 ...?
 **
 ** Remove one or more files or directories from the repository.
 **
 ** This command does NOT remove the files from disk.  It just marks the
 ** files as no longer being part of the project.  In other words, future
 ** changes to the named files will not be versioned.
 **

@@ -621,15 +621,15 @@
 **    or: %fossil mv|rename OLDNAME... DIR
 **
 ** Move or rename one or more files or directories within the repository tree.
 ** You can either rename a file or directory or move it to another subdirectory.
 **
 ** This command does NOT rename or move the files on disk.  This command merely
 ** records the fact that filenames have changed so that appropriate notations
-** can be made at the next commit/checkin.
+** can be made at the next commit/check-in.
 **
 ** Options:
 **   --case-sensitive <BOOL> Override the case-sensitive setting.
 **
 ** See also: changes, status
 */
 void mv_cmd(void){

Changes to src/attach.c.

@@ -95,15 +95,15 @@
     if( moderation_pending(attachid) ){
       @ <span class="modpending">*** Awaiting Moderator Approval ***</span>
     }
     @ <br><a href="%R/attachview?%s(zUrlTail)">%h(zFilename)</a>
     @ [<a href="%R/attachdownload/%t(zFilename)?%s(zUrlTail)">download</a>]<br />
     if( zComment ) while( fossil_isspace(zComment[0]) ) zComment++;
     if( zComment && zComment[0] ){
-      @ %!w(zComment)<br />
+      @ %!W(zComment)<br />
     }
     if( zPage==0 && zTkt==0 ){
       if( zSrc==0 || zSrc[0]==0 ){
         zSrc = "Deleted from";
       }else {
         zSrc = "Added to";
       }

Changes to src/branch.c.

@@ -157,15 +157,15 @@
   db_multi_exec("INSERT OR IGNORE INTO unsent VALUES(%d)", brid);
   if( manifest_crosslink(brid, &branch, MC_PERMIT_HOOKS)==0 ){
     fossil_fatal("%s\n", g.zErrMsg);
   }
   assert( blob_is_reset(&branch) );
   content_deltify(rootid, brid, 0);
   zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", brid);
-  fossil_print("New branch: %S\n", zUuid);
+  fossil_print("New branch: %s\n", zUuid);
   if( g.argc==3 ){
     fossil_print(
       "\n"
       "Note: the local check-out has not been updated to the new\n"
       "      branch.  To begin working on the new branch, do this:\n"
       "\n"
       "      %s update %s\n",

@@ -348,15 +348,15 @@
 
   db_prepare(&q, brlistQuery/*works-like:""*/);
   rNow = db_double(0.0, "SELECT julianday('now')");
   @ <div class="brlist"><table id="branchlisttable">
   @ <thead><tr>
   @ <th>Branch Name</th>
   @ <th>Age</th>
-  @ <th>Checkins</th>
+  @ <th>Check-ins</th>
   @ <th>Status</th>
   @ <th>Resolution</th>
   @ </tr></thead><tbody>
   while( db_step(&q)==SQLITE_ROW ){
     const char *zBranch = db_column_text(&q, 0);
     double rMtime = db_column_double(&q, 1);
     int isClosed = db_column_int(&q, 2);

@@ -433,29 +433,31 @@
   }
   if( !colorTest ){
     style_submenu_element("Color-Test", "Color-Test", "brlist?colortest");
   }else{
     style_submenu_element("All", "All", "brlist?all");
   }
   login_anonymous_available();
+#if 0
   style_sidebox_begin("Nomenclature:", "33%");
   @ <ol>
   @ <li> An <div class="sideboxDescribed">%z(href("brlist"))
   @ open branch</a></div> is a branch that has one or more
   @ <div class="sideboxDescribed">%z(href("leaves"))open leaves.</a></div>
   @ The presence of open leaves presumably means
   @ that the branch is still being extended with new check-ins.</li>
   @ <li> A <div class="sideboxDescribed">%z(href("brlist?closed"))
   @ closed branch</a></div> is a branch with only
   @ <div class="sideboxDescribed">%z(href("leaves?closed"))
   @ closed leaves</a></div>.
   @ Closed branches are fixed and do not change (unless they are first
   @ reopened).</li>
   @ </ol>
   style_sidebox_end();
+#endif
 
   branch_prepare_list_query(&q, brFlags);
   cnt = 0;
   while( db_step(&q)==SQLITE_ROW ){
     const char *zBr = db_column_text(&q, 0);
     if( cnt==0 ){
       if( colorTest ){

Changes to src/browse.c.

@@ -871,15 +871,15 @@
   }else{
     zClass = mprintf("file");
   }
   return zClass;
 }
 
 /*
-** SQL used to compute the age of all files in checkin :ckin whose
+** SQL used to compute the age of all files in check-in :ckin whose
 ** names match :glob
 */
 static const char zComputeFileAgeSetup[] =
 @ CREATE TABLE IF NOT EXISTS temp.fileage(
 @   fnid INTEGER PRIMARY KEY,
 @   fid INTEGER,
 @   mid INTEGER,

@@ -911,15 +911,15 @@
 @  ORDER BY event.mtime ASC;
 ;
 
 /*
 ** Look at all file containing in the version "vid".  Construct a
 ** temporary table named "fileage" that contains the file-id for each
 ** files, the pathname, the check-in where the file was added, and the
-** mtime on that checkin. If zGlob and *zGlob then only files matching
+** mtime on that check-in. If zGlob and *zGlob then only files matching
 ** the given glob are computed.
 */
 int compute_fileage(int vid, const char* zGlob){
   Stmt q;
   db_multi_exec(zComputeFileAgeSetup /*works-like:"constant"*/);
   db_prepare(&q, zComputeFileAgeRun  /*works-like:"constant"*/);
   db_bind_int(&q, ":ckin", vid);

@@ -985,25 +985,25 @@
   db_finalize(&q);
 }
 
 /*
 ** WEBPAGE:  fileage
 **
 ** Parameters:
-**   name=VERSION   Selects the checkin version (default=tip).
+**   name=VERSION   Selects the check-in version (default=tip).
 **   glob=STRING    Only shows files matching this glob pattern
 **                  (e.g. *.c or *.txt).
 **   showid         Show RID values for debugging
 */
 void fileage_page(void){
   int rid;
   const char *zName;
   const char *zGlob;
   const char *zUuid;
-  const char *zNow;            /* Time of checkin */
+  const char *zNow;            /* Time of check-in */
   int showId = PB("showid");
   Stmt q1, q2;
   double baseTime;
   login_check_credentials();
   if( !g.perm.Read ){ login_needed(g.anon.Read); return; }
   zName = P("name");
   if( zName==0 ) zName = "tip";

@@ -1026,20 +1026,20 @@
   @ <h2>Files in
   @ %z(href("%R/info/%!S",zUuid))[%S(zUuid)]</a>
   if( zGlob && zGlob[0] ){
     @ that match "%h(zGlob)" and
   }
   @ ordered by check-in time</h2>
   @
-  @ <p>Times are relative to the checkin time for
+  @ <p>Times are relative to the check-in time for
   @ %z(href("%R/ci/%!S",zUuid))[%S(zUuid)]</a> which is
   @ %z(href("%R/timeline?c=%t",zNow))%s(zNow)</a>.</p>
   @
   @ <div class='fileage'><table>
-  @ <tr><th>Time</th><th>Files</th><th>Checkin</th></tr>
+  @ <tr><th>Time</th><th>Files</th><th>Check-in</th></tr>
   db_prepare(&q1,
     "SELECT event.mtime, event.objid, blob.uuid,\n"
     "       coalesce(event.ecomment,event.comment),\n"
     "       coalesce(event.euser,event.user),\n"
     "       coalesce((SELECT value FROM tagxref\n"
     "                  WHERE tagtype>0 AND tagid=%d\n"
     "                    AND rid=event.objid),'trunk')\n"

Changes to src/bundle.c.

@@ -26,15 +26,15 @@
 **
 ** The bblob.delta field can be an integer, a text string, or NULL.
 ** If an integer, then the corresponding blobid is the delta basis.
 ** If a text string, then that string is a SHA1 hash for the delta
 ** basis, which is presumably in the master repository.  If NULL, then
 ** data contains contain without delta compression.
 */
-static const char zBundleInit[] = 
+static const char zBundleInit[] =
 @ CREATE TABLE IF NOT EXISTS "%w".bconfig(
 @   bcname TEXT,
 @   bcvalue ANY
 @ );
 @ CREATE TABLE IF NOT EXISTS "%w".bblob(
 @   blobid INTEGER PRIMARY KEY,      -- Blob ID
 @   uuid TEXT NOT NULL,              -- SHA1 hash of expanded blob
@@ -153,15 +153,15 @@
 static void bundle_append_cmd(void){
   Blob content, hash;
   int i;
   Stmt q;
 
   verify_all_options();
   bundle_attach_file(g.argv[3], "b1", 1);
-  db_prepare(&q, 
+  db_prepare(&q,
     "INSERT INTO bblob(blobid, uuid, sz, delta, data, notes) "
     "VALUES(NULL, $uuid, $sz, NULL, $data, $filename)");
   db_begin_transaction();
   for(i=4; i<g.argc; i++){
     int sz;
     blob_read_from_file(&content, g.argv[i]);
     sz = blob_size(&content);
@@ -177,24 +177,24 @@
     blob_reset(&hash);
   }
   db_end_transaction(0);
   db_finalize(&q);
 }
 
 /*
-** Identify a subsection of the checkin tree using command-line switches.
+** Identify a subsection of the check-in tree using command-line switches.
 ** There must be one of the following switch available:
 **
-**     --branch BRANCHNAME          All checkins on the most recent
+**     --branch BRANCHNAME          All check-ins on the most recent
 **                                  instance of BRANCHNAME
-**     --from TAG1 [--to TAG2]      Checkin TAG1 and all primary descendants
+**     --from TAG1 [--to TAG2]      Check-in TAG1 and all primary descendants
 **                                  up to and including TAG2
-**     --checkin TAG                Checkin TAG only
+**     --checkin TAG                Check-in TAG only
 **
-** Store the RIDs for all applicable checkins in the zTab table that 
+** Store the RIDs for all applicable check-ins in the zTab table that
 ** should already exist.  Invoke fossil_fatal() if any kind of error is
 ** seen.
 */
 void subtree_from_arguments(const char *zTab){
   const char *zBr;
   const char *zFrom;
   const char *zTo;
@@ -252,26 +252,26 @@
 }
 
 /*
 ** COMMAND: test-subtree
 **
 ** Usage: %fossil test-subtree ?OPTIONS?
 **
-** Show the subset of checkins that match the supplied options.  This
+** Show the subset of check-ins that match the supplied options.  This
 ** command is used to test the subtree_from_options() subroutine in the
 ** implementation and does not really have any other practical use that
 ** we know of.
 **
 ** Options:
-**    --branch BRANCH           Include only checkins on BRANCH
+**    --branch BRANCH           Include only check-ins on BRANCH
 **    --from TAG                Start the subtree at TAG
 **    --to TAG                  End the subtree at TAG
-**    --checkin TAG             The subtree is the single checkin TAG
+**    --checkin TAG             The subtree is the single check-in TAG
 **    --all                     Include FILE and TAG artifacts
-**    --exclusive               Include FILES exclusively on checkins
+**    --exclusive               Include FILES exclusively on check-ins
 */
 void test_subtree_cmd(void){
   int bAll = find_option("all",0,0)!=0;
   int bExcl = find_option("exclusive",0,0)!=0;
   db_find_and_open_repository(0,0);
   db_begin_transaction();
   db_multi_exec("CREATE TEMP TABLE tobundle(rid INTEGER PRIMARY KEY);");
@@ -429,15 +429,15 @@
   assert( pBasis!=0 || iSrc==0 );
   if( iSrc>0 ){
     if( bag_find(&busy, iSrc) ){
       fossil_fatal("delta loop while uncompressing bundle artifacts");
     }
     bag_insert(&busy, iSrc);
   }
-  db_prepare(&q, 
+  db_prepare(&q,
      "SELECT uuid, data, bblob.delta, bix.blobid"
      "  FROM bix, bblob"
      " WHERE bix.delta=%d"
      "   AND bix.blobid=bblob.blobid;",
      iSrc
   );
   while( db_step(&q)==SQLITE_ROW ){
@@ -621,84 +621,84 @@
     "   WHERE NOT EXISTS(SELECT 1 FROM blob WHERE uuid=bblob.uuid AND size>=0);"
     "CREATE TEMP TABLE got(rid INTEGER PRIMARY KEY ON CONFLICT IGNORE);"
   );
   manifest_crosslink_begin();
   bundle_import_elements(0, 0, isPriv);
   manifest_crosslink_end(0);
   describe_artifacts_to_stdout("IN got", "Imported content:");
-  db_end_transaction(0);    
+  db_end_transaction(0);
 }
 
 /* fossil bundle purge BUNDLE
 **
 ** Try to undo a prior "bundle import BUNDLE".
 **
 ** If the --force option is omitted, then this will only work if
-** there have been no checkins or tags added that use the import.
+** there have been no check-ins or tags added that use the import.
 **
 ** This routine never removes content that is not already in the bundle
 ** so the bundle serves as a backup.  The purge can be undone using
 ** "fossil bundle import BUNDLE".
 */
 static void bundle_purge_cmd(void){
   int bForce = find_option("force",0,0)!=0;
   int bTest = find_option("test",0,0)!=0;  /* Undocumented --test option */
   const char *zFile = g.argv[3];
   verify_all_options();
   if ( g.argc!=4 ) usage("purge BUNDLE ?OPTIONS?");
   bundle_attach_file(zFile, "b1", 0);
   db_begin_transaction();
 
-  /* Find all checkins of the bundle */
+  /* Find all check-ins of the bundle */
   db_multi_exec(
     "CREATE TEMP TABLE ok(rid INTEGER PRIMARY KEY);"
     "INSERT OR IGNORE INTO ok SELECT blob.rid FROM bblob, blob, plink"
     " WHERE bblob.uuid=blob.uuid"
     "   AND plink.cid=blob.rid;"
   );
 
-  /* Check to see if new checkins have been committed to checkins in
+  /* Check to see if new check-ins have been committed to check-ins in
   ** the bundle.  Do not allow the purge if that is true and if --force
   ** is omitted.
   */
   if( !bForce ){
     Stmt q;
     int n = 0;
     db_prepare(&q,
       "SELECT cid FROM plink WHERE pid IN ok AND cid NOT IN ok"
     );
     while( db_step(&q)==SQLITE_ROW ){
       whatis_rid(db_column_int(&q,0),0);
       fossil_print("%.78c\n", '-');
       n++;
     }
     db_finalize(&q);
     if( n>0 ){
-      fossil_fatal("checkins above are derived from checkins in the bundle.");
+      fossil_fatal("check-ins above are derived from check-ins in the bundle.");
     }
   }
 
   /* Find all files associated with those check-ins that are used
   ** nowhere else. */
   find_checkin_associates("ok", 1);
 
   /* Check to see if any associated files are not in the bundle.  Issue
   ** an error if there are any, unless --force is used.
   */
   if( !bForce ){
     db_multi_exec(
        "CREATE TEMP TABLE err1(rid INTEGER PRIMARY KEY);"
        "INSERT INTO err1 "
        " SELECT blob.rid FROM ok CROSS JOIN blob"
        "     WHERE blob.rid=ok.rid"
        "       AND blob.uuid NOT IN (SELECT uuid FROM bblob);"
     );
     if( db_changes() ){
       describe_artifacts_to_stdout("IN err1", 0);
-      fossil_fatal("artifacts above associated with bundle checkins "
+      fossil_fatal("artifacts above associated with bundle check-ins "
                    " are not in the bundle");
     }else{
       db_multi_exec("DROP TABLE err1;");
     }
   }
 
   if( bTest ){
**      consecutively on standard output.  This subcommand was designed
**      for testing and introspection of bundles and is not something
**      commonly used.
**
**   fossil bundle export BUNDLE ?OPTIONS?
**
**      Generate a new bundle, in the file named BUNDLE, that contains a
**      subset of the checkins in the repository (usually a single branch)
**      described by the --branch, --from, --to, and/or --checkin options,
**      at least one of which is required.  If BUNDLE already exists, the
**      specified content is added to the bundle.
**
**         --branch BRANCH            Package all check-ins on BRANCH.
**         --from TAG1 --to TAG2      Package checkins between TAG1 and TAG2.
**         --checkin TAG              Package the single checkin TAG
**         --standalone               Do no use delta-encoding against
**                                      artifacts not in the bundle
**
**   fossil bundle extend BUNDLE
**
**      The BUNDLE must already exist.  This subcommand adds to the bundle
**      any checkins that are descendants of checkins already in the bundle,
**      and any tags that apply to artifacts in the bundle.
**
**   fossil bundle import BUNDLE ?--publish?
**
**      Import all content from BUNDLE into the repository.  By default, the
**      imported files are private and will not sync.  Use the --publish
**      option makes the import public.
**
**   fossil bundle ls BUNDLE
**
**      List the contents of BUNDLE on standard output
**
**   fossil bundle purge BUNDLE
**
**      Remove from the repository all files that are used exclusively 
**      by checkins in BUNDLE.  This has the effect of undoing a
**      "fossil bundle import".
**
** SUMMARY:
**   fossil bundle append BUNDLE FILE...              Add files to BUNDLE
**   fossil bundle cat BUNDLE UUID...                 Extract file from BUNDLE
**   fossil bundle export BUNDLE ?OPTIONS?            Create a new BUNDLE
**          --branch BRANCH --from TAG1 --to TAG2       Checkins to include
**          --checkin TAG                               Use only checkin TAG
**          --standalone                                Omit dependencies
**   fossil bundle extend BUNDLE                      Update with newer content
**   fossil bundle import BUNDLE ?OPTIONS?            Import a bundle
**          --publish                                   Publish the import
**          --force                                     Cross-repo import
**   fossil bundle ls BUNDLE                          List content of a bundle
**   fossil bundle purge BUNDLE                       Undo an import


*/
void bundle_cmd(void){
  const char *zSubcmd;
  int n;
  if( g.argc<4 ) usage("SUBCOMMAND BUNDLE ?OPTIONS?");
  zSubcmd = g.argv[2];
  db_find_and_open_repository(0,0);
**      consecutively on standard output.  This subcommand was designed
**      for testing and introspection of bundles and is not something
**      commonly used.
**
**   fossil bundle export BUNDLE ?OPTIONS?
**
**      Generate a new bundle, in the file named BUNDLE, that contains a
**      subset of the check-ins in the repository (usually a single branch)
**      described by the --branch, --from, --to, and/or --checkin options,
**      at least one of which is required.  If BUNDLE already exists, the
**      specified content is added to the bundle.
**
**         --branch BRANCH            Package all check-ins on BRANCH.
**         --from TAG1 --to TAG2      Package check-ins between TAG1 and TAG2.
**         --checkin TAG              Package the single check-in TAG
**         --standalone               Do not use delta-encoding against
**                                      artifacts not in the bundle
**
**   fossil bundle extend BUNDLE
**
**      The BUNDLE must already exist.  This subcommand adds to the bundle
**      any check-ins that are descendants of check-ins already in the bundle,
**      and any tags that apply to artifacts in the bundle.
**
**   fossil bundle import BUNDLE ?--publish?
**
**      Import all content from BUNDLE into the repository.  By default, the
**      imported files are private and will not sync.  The --publish
**      option makes the import public.
**
**   fossil bundle ls BUNDLE
**
**      List the contents of BUNDLE on standard output
**
**   fossil bundle purge BUNDLE
**
**      Remove from the repository all files that are used exclusively
**      by check-ins in BUNDLE.  This has the effect of undoing a
**      "fossil bundle import".
**
** SUMMARY:
**   fossil bundle append BUNDLE FILE...              Add files to BUNDLE
**   fossil bundle cat BUNDLE UUID...                 Extract file from BUNDLE
**   fossil bundle export BUNDLE ?OPTIONS?            Create a new BUNDLE
**          --branch BRANCH --from TAG1 --to TAG2       Check-ins to include
**          --checkin TAG                               Use only check-in TAG
**          --standalone                                Omit dependencies
**   fossil bundle extend BUNDLE                      Update with newer content
**   fossil bundle import BUNDLE ?OPTIONS?            Import a bundle
**          --publish                                   Publish the import
**          --force                                     Cross-repo import
**   fossil bundle ls BUNDLE                          List content of a bundle
**   fossil bundle purge BUNDLE                       Undo an import
**
** See also: publish
*/
void bundle_cmd(void){
  const char *zSubcmd;
  int n;
  if( g.argc<4 ) usage("SUBCOMMAND BUNDLE ?OPTIONS?");
  zSubcmd = g.argv[2];
  db_find_and_open_repository(0,0);
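
Taken together, the subcommands documented above support a simple offline
exchange workflow.  The following sequence is only an illustration (the bundle
file name and branch name are made up); it is not part of the code in this
check-in:

   fossil bundle export feature.bundle --branch my-feature --standalone
   fossil bundle import feature.bundle
   fossil bundle purge feature.bundle      (undo the import if it is unwanted)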

Changes to src/checkin.c.

**    --mimetype MIMETYPE        mimetype of check-in comment
**    -n|--dry-run               If given, display instead of run actions
**    --no-warnings              omit all warnings about file contents
**    --nosign                   do not attempt to sign this commit with gpg
**    --private                  do not sync changes and their descendants
**    --sha1sum                  verify file status using SHA1 hashing rather
**                               than relying on file mtimes
**    --tag TAG-NAME             assign given tag TAG-NAME to the checkin
**
** See also: branch, changes, checkout, extras, sync
*/
void commit_cmd(void){
  int hasChanges;        /* True if unsaved changes exist */
  int vid;               /* blob-id of parent version */
  int nrid;              /* blob-id of a modified file */
**    --mimetype MIMETYPE        mimetype of check-in comment
**    -n|--dry-run               If given, display instead of run actions
**    --no-warnings              omit all warnings about file contents
**    --nosign                   do not attempt to sign this commit with gpg
**    --private                  do not sync changes and their descendants
**    --sha1sum                  verify file status using SHA1 hashing rather
**                               than relying on file mtimes
**    --tag TAG-NAME             assign given tag TAG-NAME to the check-in
**
** See also: branch, changes, checkout, extras, sync
*/
void commit_cmd(void){
  int hasChanges;        /* True if unsaved changes exist */
  int vid;               /* blob-id of parent version */
  int nrid;              /* blob-id of a modified file */
  }

  /*
  ** Do not allow a commit against a closed leaf unless the commit
  ** ends up on a different branch.
  */
  if(
      /* parent checkin has the "closed" tag... */
      db_exists("SELECT 1 FROM tagxref"
                " WHERE tagid=%d AND rid=%d AND tagtype>0",
                TAG_CLOSED, vid)
      /* ... and the new checkin has no --branch option or the --branch
      ** option does not actually change the branch */
   && (sCiInfo.zBranch==0
       || db_exists("SELECT 1 FROM tagxref"
                    " WHERE tagid=%d AND rid=%d AND tagtype>0"
                    "   AND value=%Q", TAG_BRANCH, vid, sCiInfo.zBranch))
  ){
    fossil_fatal("cannot commit against a closed leaf");
  }

  /*
  ** Do not allow a commit against a closed leaf unless the commit
  ** ends up on a different branch.
  */
  if(
      /* parent check-in has the "closed" tag... */
      db_exists("SELECT 1 FROM tagxref"
                " WHERE tagid=%d AND rid=%d AND tagtype>0",
                TAG_CLOSED, vid)
      /* ... and the new check-in has no --branch option or the --branch
      ** option does not actually change the branch */
   && (sCiInfo.zBranch==0
       || db_exists("SELECT 1 FROM tagxref"
                    " WHERE tagid=%d AND rid=%d AND tagtype>0"
                    "   AND value=%Q", TAG_BRANCH, vid, sCiInfo.zBranch))
  ){
    fossil_fatal("cannot commit against a closed leaf");
      pBaseline = pParent;
    }
    if( pBaseline ){
      Blob delta;
      create_manifest(&delta, zBaselineUuid, pBaseline, vid, &sCiInfo, &szD);
      /*
      ** At this point, two manifests have been constructed, either of
      ** which would work for this checkin.  The first manifest (held
      ** in the "manifest" variable) is a baseline manifest and the second
      ** (held in variable named "delta") is a delta manifest.  The
      ** question now is: which manifest should we use?
      **
      ** Let B be the number of F-cards in the baseline manifest and
      ** let D be the number of F-cards in the delta manifest, plus one for
      ** the B-card.  (B is held in the szB variable and D is held in the
      pBaseline = pParent;
    }
    if( pBaseline ){
      Blob delta;
      create_manifest(&delta, zBaselineUuid, pBaseline, vid, &sCiInfo, &szD);
      /*
      ** At this point, two manifests have been constructed, either of
      ** which would work for this check-in.  The first manifest (held
      ** in the "manifest" variable) is a baseline manifest and the second
      ** (held in variable named "delta") is a delta manifest.  The
      ** question now is: which manifest should we use?
      **
      ** Let B be the number of F-cards in the baseline manifest and
      ** let D be the number of F-cards in the delta manifest, plus one for
      ** the B-card.  (B is held in the szB variable and D is held in the
  zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid);

  db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid"
                 " WHERE id=-4");
  while( db_step(&q)==SQLITE_ROW ){
    const char *zIntegrateUuid = db_column_text(&q, 0);
    if( is_a_leaf(db_column_int(&q, 1)) ){
      fossil_print("Closed: %S\n", zIntegrateUuid);
    }else{
      fossil_print("Not_Closed: %S (not a leaf any more)\n", zIntegrateUuid);
    }
  }
  db_finalize(&q);

  fossil_print("New_Version: %S\n", zUuid);
  if( outputManifest ){
    zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot);
    blob_zero(&muuid);
    blob_appendf(&muuid, "%s\n", zUuid);
    blob_write_to_file(&muuid, zManifestFile);
    free(zManifestFile);
    blob_reset(&muuid);
  zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", nvid);

  db_prepare(&q, "SELECT uuid,merge FROM vmerge JOIN blob ON merge=rid"
                 " WHERE id=-4");
  while( db_step(&q)==SQLITE_ROW ){
    const char *zIntegrateUuid = db_column_text(&q, 0);
    if( is_a_leaf(db_column_int(&q, 1)) ){
      fossil_print("Closed: %s\n", zIntegrateUuid);
    }else{
      fossil_print("Not_Closed: %s (not a leaf any more)\n", zIntegrateUuid);
    }
  }
  db_finalize(&q);

  fossil_print("New_Version: %s\n", zUuid);
  if( outputManifest ){
    zManifestFile = mprintf("%smanifest.uuid", g.zLocalRoot);
    blob_zero(&muuid);
    blob_appendf(&muuid, "%s\n", zUuid);
    blob_write_to_file(&muuid, zManifestFile);
    free(zManifestFile);
    blob_reset(&muuid);
    " WHERE is_selected(id);"
    , vid, nvid
  );
  db_lset_int("checkout", nvid);

  if( useCksum ){
    /* Verify that the repository checksum matches the expected checksum
    ** calculated before the checkin started (and stored as the R record
    ** of the manifest file).
    */
    vfile_aggregate_checksum_repository(nvid, &cksum2);
    if( blob_compare(&cksum1, &cksum2) ){
      vfile_compare_repository_to_disk(nvid);
      fossil_fatal("working checkout does not match what would have ended "
                   "up in the repository:  %b versus %b",
    " WHERE is_selected(id);"
    , vid, nvid
  );
  db_lset_int("checkout", nvid);

  if( useCksum ){
    /* Verify that the repository checksum matches the expected checksum
    ** calculated before the check-in started (and stored as the R record
    ** of the manifest file).
    */
    vfile_aggregate_checksum_repository(nvid, &cksum2);
    if( blob_compare(&cksum1, &cksum2) ){
      vfile_compare_repository_to_disk(nvid);
      fossil_fatal("working checkout does not match what would have ended "
                   "up in the repository:  %b versus %b",

Changes to src/clone.c.

      fossil_fatal("server returned an error - clone aborted");
    }
    db_open_repository(g.argv[3]);
  }
  db_begin_transaction();
  fossil_print("Rebuilding repository meta-data...\n");
  rebuild_db(0, 1, 0);









  fossil_print("project-id: %s\n", db_get("project-code", 0));
  fossil_print("server-id:  %s\n", db_get("server-code", 0));
  zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin);
  fossil_print("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword);
  db_end_transaction(0);
}

/*
** If user chooses to use HTTP Authentication over unencrypted HTTP,
** remember decision.  Otherwise, if the URL is being changed and no
** preference has been indicated, err on the safe side and revert the
** decision. Set the global preference if the URL is not being changed.
      fossil_fatal("server returned an error - clone aborted");
    }
    db_open_repository(g.argv[3]);
  }
  db_begin_transaction();
  fossil_print("Rebuilding repository meta-data...\n");
  rebuild_db(0, 1, 0);
  fossil_print("Extra delta compression... "); fflush(stdout);
  extra_deltification();
  db_end_transaction(0);
  fossil_print("\nVacuuming the database... "); fflush(stdout);
  if( db_int(0, "PRAGMA page_count")>1000 
   && db_int(0, "PRAGMA page_size")<8192 ){
     db_multi_exec("PRAGMA page_size=8192;");
  }
  db_multi_exec("VACUUM");
  fossil_print("\nproject-id: %s\n", db_get("project-code", 0));
  fossil_print("server-id:  %s\n", db_get("server-code", 0));
  zPassword = db_text(0, "SELECT pw FROM user WHERE login=%Q", g.zLogin);
  fossil_print("admin-user: %s (password is \"%s\")\n", g.zLogin, zPassword);

}

/*
** If user chooses to use HTTP Authentication over unencrypted HTTP,
** remember decision.  Otherwise, if the URL is being changed and no
** preference has been indicated, err on the safe side and revert the
** decision. Set the global preference if the URL is not being changed.
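
The page-size adjustment added above only takes effect because a VACUUM follows
it: for an existing non-WAL database, SQLite applies a new PRAGMA page_size the
next time the file is vacuumed.  A minimal standalone sketch of the same idea,
using the plain SQLite C API instead of Fossil's db_* wrappers (the database
file name is hypothetical), might look like this:

  #include <sqlite3.h>

  /* Grow the page size of an existing repository database, then rebuild it.
  ** The new page size only becomes effective during the VACUUM step. */
  static int grow_page_size(const char *zDbName /* e.g. "clone.fossil" */){
    sqlite3 *db;
    int rc = sqlite3_open(zDbName, &db);
    if( rc==SQLITE_OK ) rc = sqlite3_exec(db, "PRAGMA page_size=8192;", 0, 0, 0);
    if( rc==SQLITE_OK ) rc = sqlite3_exec(db, "VACUUM;", 0, 0, 0);
    sqlite3_close(db);
    return rc;
  }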

Changes to src/db.c.

** their associated permissions will not be copied; however, the system
** default users "anonymous", "nobody", "reader", "developer", and their
** associated permissions will be copied.
**
** Options:
**    --template      FILE      copy settings from repository file
**    --admin-user|-A USERNAME  select given USERNAME as admin user
**    --date-override DATETIME  use DATETIME as time of the initial checkin
**                              (default: do not create an initial checkin)
**
** See also: clone
*/
void create_repository_cmd(void){
  char *zPassword;
  const char *zTemplate;      /* Repository from which to copy settings */
  const char *zDate;          /* Date of the initial check-in */
** their associated permissions will not be copied; however, the system
** default users "anonymous", "nobody", "reader", "developer", and their
** associated permissions will be copied.
**
** Options:
**    --template      FILE      copy settings from repository file
**    --admin-user|-A USERNAME  select given USERNAME as admin user
**    --date-override DATETIME  use DATETIME as time of the initial check-in
**                              (default: do not create an initial check-in)
**
** See also: clone
*/
void create_repository_cmd(void){
  char *zPassword;
  const char *zTemplate;      /* Repository from which to copy settings */
  const char *zDate;          /* Date of the initial check-in */

Changes to src/descendants.c.

** already exist.  Load this table with the RID of all
** check-ins that are leaves which are descended from
** check-in iBase.
**
** A "leaf" is a check-in that has no children in the same branch.
** There is a separate permanent table LEAF that contains all leaves
** in the tree.  This routine is used to compute a subset of that
** table consisting of leaves that are descended from a single checkin.
**
** The closeMode flag determines behavior associated with the "closed"
** tag:
**
**    closeMode==0       Show all leaves regardless of the "closed" tag.
**
**    closeMode==1       Show only leaves without the "closed" tag.
** already exist.  Load this table with the RID of all
** check-ins that are leaves which are descended from
** check-in iBase.
**
** A "leaf" is a check-in that has no children in the same branch.
** There is a separate permanent table LEAF that contains all leaves
** in the tree.  This routine is used to compute a subset of that
** table consisting of leaves that are descended from a single check-in.
**
** The closeMode flag determines behavior associated with the "closed"
** tag:
**
**    closeMode==0       Show all leaves regardless of the "closed" tag.
**
**    closeMode==1       Show only leaves without the "closed" tag.
}

/*
** COMMAND: descendants*
**
** Usage: %fossil descendants ?CHECKIN? ?OPTIONS?
**
** Find all leaf descendants of the checkin specified or if the argument
** is omitted, of the checkin currently checked out.
**
** Options:
**    -R|--repository FILE       Extract info from repository FILE
**    -W|--width <num>           Width of lines (default is to auto-detect).
**                               Must be >20 or 0 (= no limit, resulting in a
**                               single line per entry).
**
}

/*
** COMMAND: descendants*
**
** Usage: %fossil descendants ?CHECKIN? ?OPTIONS?
**
** Find all leaf descendants of the check-in specified, or if the argument
** is omitted, of the check-in currently checked out.
**
** Options:
**    -R|--repository FILE       Extract info from repository FILE
**    -W|--width <num>           Width of lines (default is to auto-detect).
**                               Must be >20 or 0 (= no limit, resulting in a
**                               single line per entry).
**
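
As a usage illustration only (the check-in name is one of the examples used
elsewhere in this diff), the options above combine as follows:

   fossil descendants trunk -W 0     (leaf descendants of trunk, one per line)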
    style_submenu_element("Closed", "Closed", "leaves?closed");
  }
  if( showClosed || showAll ){
    style_submenu_element("Open", "Open", "leaves");
  }
  style_header("Leaves");
  login_anonymous_available();

  style_sidebox_begin("Nomenclature:", "33%");
  @ <ol>
  @ <li> A <div class="sideboxDescribed">leaf</div>
  @ is a check-in with no descendants in the same branch.</li>
  @ <li> An <div class="sideboxDescribed">open leaf</div>
  @ is a leaf that does not have a "closed" tag
  @ and is thus assumed to still be in use.</li>
  @ <li> A <div class="sideboxDescribed">closed leaf</div>
  @ has a "closed" tag and is thus assumed to
  @ be historical and no longer in active use.</li>
  @ </ol>
  style_sidebox_end();


  if( showAll ){
    @ <h1>All leaves, both open and closed:</h1>
  }else if( showClosed ){
    @ <h1>Closed leaves:</h1>
  }else{
    @ <h1>Open leaves:</h1>
    style_submenu_element("Closed", "Closed", "leaves?closed");
  }
  if( showClosed || showAll ){
    style_submenu_element("Open", "Open", "leaves");
  }
  style_header("Leaves");
  login_anonymous_available();
#if 0
  style_sidebox_begin("Nomenclature:", "33%");
  @ <ol>
  @ <li> A <div class="sideboxDescribed">leaf</div>
  @ is a check-in with no descendants in the same branch.</li>
  @ <li> An <div class="sideboxDescribed">open leaf</div>
  @ is a leaf that does not have a "closed" tag
  @ and is thus assumed to still be in use.</li>
  @ <li> A <div class="sideboxDescribed">closed leaf</div>
  @ has a "closed" tag and is thus assumed to
  @ be historical and no longer in active use.</li>
  @ </ol>
  style_sidebox_end();
#endif

  if( showAll ){
    @ <h1>All leaves, both open and closed:</h1>
  }else if( showClosed ){
    @ <h1>Closed leaves:</h1>
  }else{
    @ <h1>Open leaves:</h1>

Changes to src/diff.c.

** COMMAND: praise
**
** %fossil (annotate|blame|praise) ?OPTIONS? FILENAME
**
** Output the text of a file with markings to show when each line of
** the file was last modified.  The "annotate" command shows line numbers
** and omits the username.  The "blame" and "praise" commands show the user
** who made each checkin and omits the line number.
**
** Options:
**   --filevers                 Show file version numbers rather than check-in versions
**   -l|--log                   List all versions analyzed
**   -n|--limit N               Only look backwards in time by N versions
**   -w|--ignore-all-space      Ignore white space when comparing lines
**   -Z|--ignore-trailing-space Ignore whitespace at line end
** COMMAND: praise
**
** %fossil (annotate|blame|praise) ?OPTIONS? FILENAME
**
** Output the text of a file with markings to show when each line of
** the file was last modified.  The "annotate" command shows line numbers
** and omits the username.  The "blame" and "praise" commands show the user
** who made each check-in and omit the line number.
**
** Options:
**   --filevers                 Show file version numbers rather than check-in versions
**   -l|--log                   List all versions analyzed
**   -n|--limit N               Only look backwards in time by N versions
**   -w|--ignore-all-space      Ignore white space when comparing lines
**   -Z|--ignore-trailing-space Ignore whitespace at line end
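
For illustration, the practical difference between the modes described above
(the file name is hypothetical):

   fossil annotate src/foo.c         (line numbers, no user names)
   fossil blame -n 20 src/foo.c      (user names, no line numbers,
                                      look back at most 20 versions)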

Changes to src/diff.tcl.

bind . <Return> {
  event generate .bb.files <1>
  event generate .bb.files <ButtonRelease-1>
  break
}
foreach {key axis args} {
  Up    y {scroll -5 units}

  Down  y {scroll 5 units}

  Left  x {scroll -5 units}

  Right x {scroll 5 units}

  Prior y {scroll -1 page}

  Next  y {scroll 1 page}

  Home  y {moveto 0}

  End   y {moveto 1}
} {
  bind . <$key> "scroll-$axis $args; break"
  bind . <Shift-$key> continue
}

frame .bb
bind . <Return> {
  event generate .bb.files <1>
  event generate .bb.files <ButtonRelease-1>
  break
}
foreach {key axis args} {
  Up    y {scroll -5 units}
  k     y {scroll -5 units}
  Down  y {scroll 5 units}
  j     y {scroll 5 units}
  Left  x {scroll -5 units}
  h     x {scroll -5 units}
  Right x {scroll 5 units}
  l     x {scroll 5 units}
  Prior y {scroll -1 page}
  b     y {scroll -1 page}
  Next  y {scroll 1 page}
  space y {scroll 1 page}
  Home  y {moveto 0}
  g     y {moveto 0}
  End   y {moveto 1}
} {
  bind . <$key> "scroll-$axis $args; break"
  bind . <Shift-$key> continue
}

frame .bb

Changes to src/doc.c.

      if( seenClass ) return 1;
    }
  }
  return seenClass;
}

/*
** Look for a file named zName in the checkin with RID=vid.  Load the content
** of that file into pContent and return the RID for the file.  Or return 0
** if the file is not found or could not be loaded.
*/
int doc_load_content(int vid, const char *zName, Blob *pContent){
  int rid;   /* The RID of the file being loaded */
  if( !db_table_exists("repository","vcache") ){
    db_multi_exec(
      "CREATE TABLE IF NOT EXISTS vcache(\n"
      "  vid INTEGER,         -- checkin ID\n"
      "  fname TEXT,          -- filename\n"
      "  rid INTEGER,         -- artifact ID\n"
      "  PRIMARY KEY(vid,fname)\n"
      ") WITHOUT ROWID"
    );
  }
  if( !db_exists("SELECT 1 FROM vcache WHERE vid=%d", vid) ){
      if( seenClass ) return 1;
    }
  }
  return seenClass;
}

/*
** Look for a file named zName in the check-in with RID=vid.  Load the content
** of that file into pContent and return the RID for the file.  Or return 0
** if the file is not found or could not be loaded.
*/
int doc_load_content(int vid, const char *zName, Blob *pContent){
  int rid;   /* The RID of the file being loaded */
  if( !db_table_exists("repository","vcache") ){
    db_multi_exec(
      "CREATE TABLE IF NOT EXISTS vcache(\n"
      "  vid INTEGER,         -- check-in ID\n"
      "  fname TEXT,          -- filename\n"
      "  rid INTEGER,         -- artifact ID\n"
      "  PRIMARY KEY(vid,fname)\n"
      ") WITHOUT ROWID"
    );
  }
  if( !db_exists("SELECT 1 FROM vcache WHERE vid=%d", vid) ){
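
Once the vcache table above is populated for a check-in, resolving a document
becomes a single indexed lookup.  The following is only a sketch of the kind of
query the (vid,fname) primary key exists to serve, written with the same
formatted-SQL helpers seen elsewhere in this diff; it is not code from this
check-in:

  /* Hypothetical lookup against the vcache schema shown above: map a
  ** check-in id and a file name to the corresponding artifact id. */
  int rid = db_int(0,
     "SELECT rid FROM vcache WHERE vid=%d AND fname=%Q", vid, zName);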
** then FILE is completely replaced by "404.md" and tried.  If that is
** not found, then a default 404 screen is generated.
*/
void doc_page(void){
  const char *zName;                /* Argument to the /doc page */
  const char *zOrigName = "?";      /* Original document name */
  const char *zMime;                /* Document MIME type */
  char *zCheckin = "tip";           /* The checkin holding the document */
  int vid = 0;                      /* Artifact of checkin */
  int rid = 0;                      /* Artifact of file */
  int i;                            /* Loop counter */
  Blob filebody;                    /* Content of the documentation file */
  Blob title;                       /* Document title */
  int nMiss = (-1);                 /* Failed attempts to find the document */
  static const char *const azSuffix[] = {
     "index.html", "index.wiki", "index.md"
** then FILE is completely replaced by "404.md" and tried.  If that is
** not found, then a default 404 screen is generated.
*/
void doc_page(void){
  const char *zName;                /* Argument to the /doc page */
  const char *zOrigName = "?";      /* Original document name */
  const char *zMime;                /* Document MIME type */
  char *zCheckin = "tip";           /* The check-in holding the document */
  int vid = 0;                      /* Artifact of check-in */
  int rid = 0;                      /* Artifact of file */
  int i;                            /* Loop counter */
  Blob filebody;                    /* Content of the documentation file */
  Blob title;                       /* Document title */
  int nMiss = (-1);                 /* Failed attempts to find the document */
  static const char *const azSuffix[] = {
     "index.html", "index.wiki", "index.md"

Changes to src/finfo.c.

** to stdout.  The -p mode is another form of the "cat" command.
**
** Options:
**   -b|--brief           display a brief (one line / revision) summary
**   --case-sensitive B   Enable or disable case-sensitive filenames.  B is a
**                        boolean: "yes", "no", "true", "false", etc.
**   -l|--log             select log mode (the default)
**   -n|--limit N         display the first N changes. N=0 means no limit.

**   --offset P           skip P changes
**   -p|--print           select print mode
**   -r|--revision R      print the given revision (or ckout, if none is given)
**                        to stdout (only in print mode)
**   -s|--status          select status mode (print a status indicator for FILE)
**   -W|--width <num>     Width of lines (default is to auto-detect). Must be
**                        >22 or 0 (= no limit, resulting in a single line per
** to stdout.  The -p mode is another form of the "cat" command.
**
** Options:
**   -b|--brief           display a brief (one line / revision) summary
**   --case-sensitive B   Enable or disable case-sensitive filenames.  B is a
**                        boolean: "yes", "no", "true", "false", etc.
**   -l|--log             select log mode (the default)
**   -n|--limit N         Display the first N changes (default unlimited).
**                        N<=0 means no limit.
**   --offset P           skip P changes
**   -p|--print           select print mode
**   -r|--revision R      print the given revision (or ckout, if none is given)
**                        to stdout (only in print mode)
**   -s|--status          select status mode (print a status indicator for FILE)
**   -W|--width <num>     Width of lines (default is to auto-detect). Must be
**                        >22 or 0 (= no limit, resulting in a single line per
    }
    zLimit = find_option("limit","n",1);
    zWidth = find_option("width","W",1);
    iLimit = zLimit ? atoi(zLimit) : -1;
    zOffset = find_option("offset",0,1);
    iOffset = zOffset ? atoi(zOffset) : 0;
    iBrief = (find_option("brief","b",0) == 0);



    if( zWidth ){
      iWidth = atoi(zWidth);
      if( (iWidth!=0) && (iWidth<=22) ){
        fossil_fatal("-W|--width value must be >22 or 0");
      }
    }else{
      iWidth = -1;
    }
    zLimit = find_option("limit","n",1);
    zWidth = find_option("width","W",1);
    iLimit = zLimit ? atoi(zLimit) : -1;
    zOffset = find_option("offset",0,1);
    iOffset = zOffset ? atoi(zOffset) : 0;
    iBrief = (find_option("brief","b",0) == 0);
    if( iLimit==0 ){
      iLimit = -1;
    }
    if( zWidth ){
      iWidth = atoi(zWidth);
      if( (iWidth!=0) && (iWidth<=22) ){
        fossil_fatal("-W|--width value must be >22 or 0");
      }
    }else{
      iWidth = -1;
    }
    gidx = graph_add_row(pGraph, frid>0 ? frid : fpid+1000000000,
                         nParent, aParent, zBr, zBgClr,
                         zUuid, 0);
    if( strncmp(zDate, zPrevDate, 10) ){
      sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate);
      @ <tr><td>
      @   <div class="divider">%s(zPrevDate)</div>
      @ </td><td></td><td></td></tr>
    }
    memcpy(zTime, &zDate[11], 5);
    zTime[5] = 0;
    @ <tr><td class="timelineTime">
    @ %z(href("%R/timeline?c=%t",zDate))%s(zTime)</a></td>
    @ <td class="timelineGraph"><div id="m%d(gidx)"></div></td>
    }
    gidx = graph_add_row(pGraph, frid>0 ? frid : fpid+1000000000,
                         nParent, aParent, zBr, zBgClr,
                         zUuid, 0);
    if( strncmp(zDate, zPrevDate, 10) ){
      sqlite3_snprintf(sizeof(zPrevDate), zPrevDate, "%.10s", zDate);
      @ <tr><td>
      @   <div class="divider timelineDate">%s(zPrevDate)</div>
      @ </td><td></td><td></td></tr>
    }
    memcpy(zTime, &zDate[11], 5);
    zTime[5] = 0;
    @ <tr><td class="timelineTime">
    @ %z(href("%R/timeline?c=%t",zDate))%s(zTime)</a></td>
    @ <td class="timelineGraph"><div id="m%d(gidx)"></div></td>
    @ branch: %z(href("%R/timeline?t=%T&n=200",zBr))%h(zBr)</a>)
    if( g.perm.Hyperlink && zUuid ){
      const char *z = zFilename;
      @ %z(href("%R/annotate?filename=%h&checkin=%s",z,zCkin))
      @ [annotate]</a>
      @ %z(href("%R/blame?filename=%h&checkin=%s",z,zCkin))
      @ [blame]</a>
      @ %z(href("%R/timeline?n=200&uf=%!S",zUuid))[checkins&nbsp;using]</a>
      if( fpid ){
        @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zPUuid,zUuid))[diff]</a>
      }
    }
    if( fDebug & FINFO_DEBUG_MLINK ){
      int ii;
      char *zAncLink;
    @ branch: %z(href("%R/timeline?t=%T&n=200",zBr))%h(zBr)</a>)
    if( g.perm.Hyperlink && zUuid ){
      const char *z = zFilename;
      @ %z(href("%R/annotate?filename=%h&checkin=%s",z,zCkin))
      @ [annotate]</a>
      @ %z(href("%R/blame?filename=%h&checkin=%s",z,zCkin))
      @ [blame]</a>
      @ %z(href("%R/timeline?n=200&uf=%!S",zUuid))[check-ins&nbsp;using]</a>
      if( fpid ){
        @ %z(href("%R/fdiff?sbs=1&v1=%!S&v2=%!S",zPUuid,zUuid))[diff]</a>
      }
    }
    if( fDebug & FINFO_DEBUG_MLINK ){
      int ii;
      char *zAncLink;

Changes to src/foci.c.

** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This routine implements an SQLite virtual table that gives all of the
** files associated with a single checkin.
**
** The filename "foci" is short for "Files Of CheckIn".
**
** Usage example:
**
**    CREATE VIRTUAL TABLE temp.foci USING files_of_checkin;
**                      -- ^^^^--- important!
**    SELECT * FROM foci WHERE checkinID=symbolic_name_to_rid('trunk');
**
** The symbolic_name_to_rid('trunk') function finds the BLOB.RID value 
** corresponding to the 'trunk' tag.  Then the files_of_checkin virtual table
** decodes the manifest defined by that BLOB and returns all files described
** by that manifest.  The "schema" for the temp.foci table is:
**
**     CREATE TABLE files_of_checkin(
**       checkinID    INTEGER,    -- RID for the checkin manifest
**       filename     TEXT,       -- Name of a file
**       uuid         TEXT,       -- SHA1 hash of the file
**       previousName TEXT,       -- Name of the file in previous checkin
**       perm         TEXT        -- Permissions on the file
**     );
**
*/
#include "config.h"
#include "foci.h"
#include <assert.h>

/*
** The schema for the virtual table:
*/
static const char zFociSchema[] =
@ CREATE TABLE files_of_checkin(
@  checkinID    INTEGER,    -- RID for the checkin manifest
@  filename     TEXT,       -- Name of a file
@  uuid         TEXT,       -- SHA1 hash of the file
@  previousName TEXT,       -- Name of the file in previous checkin
@  perm         TEXT        -- Permissions on the file
@ );
;

#if INTERFACE
/*
** The subclasses of sqlite3_vtab  and sqlite3_vtab_cursor tables
** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This routine implements an SQLite virtual table that gives all of the
** files associated with a single check-in.
**
** The filename "foci" is short for "Files of Check-in".
**
** Usage example:
**
**    CREATE VIRTUAL TABLE temp.foci USING files_of_checkin;
**                      -- ^^^^--- important!
**    SELECT * FROM foci WHERE checkinID=symbolic_name_to_rid('trunk');
**
** The symbolic_name_to_rid('trunk') function finds the BLOB.RID value
** corresponding to the 'trunk' tag.  Then the files_of_checkin virtual table
** decodes the manifest defined by that BLOB and returns all files described
** by that manifest.  The "schema" for the temp.foci table is:
**
**     CREATE TABLE files_of_checkin(
**       checkinID    INTEGER,    -- RID for the check-in manifest
**       filename     TEXT,       -- Name of a file
**       uuid         TEXT,       -- SHA1 hash of the file
**       previousName TEXT,       -- Name of the file in previous check-in
**       perm         TEXT        -- Permissions on the file
**     );
**
*/
#include "config.h"
#include "foci.h"
#include <assert.h>

/*
** The schema for the virtual table:
*/
static const char zFociSchema[] =
@ CREATE TABLE files_of_checkin(
@  checkinID    INTEGER,    -- RID for the check-in manifest
@  filename     TEXT,       -- Name of a file
@  uuid         TEXT,       -- SHA1 hash of the file
@  previousName TEXT,       -- Name of the file in previous check-in
@  perm         TEXT        -- Permissions on the file
@ );
;

#if INTERFACE
/*
** The subclasses of sqlite3_vtab  and sqlite3_vtab_cursor tables
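
Building on the usage example in the header comment above, a caller inside
Fossil could walk the file list of a check-in with the usual db_prepare/db_step
pattern.  This is an illustrative sketch based only on the schema shown above,
not code from this check-in:

  /* Sketch: list the files of the trunk check-in via the virtual table. */
  Stmt q;
  db_multi_exec("CREATE VIRTUAL TABLE temp.foci USING files_of_checkin;");
  db_prepare(&q,
    "SELECT filename, uuid FROM foci"
    " WHERE checkinID=symbolic_name_to_rid('trunk')");
  while( db_step(&q)==SQLITE_ROW ){
    fossil_print("%s  %s\n", db_column_text(&q,1), db_column_text(&q,0));
  }
  db_finalize(&q);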

Changes to src/fusefs.c.

**
** This command uses the Fuse Filesystem to mount a directory at
** DIRECTORY that contains the content of all check-ins in the
** repository.  The names of files are DIRECTORY/checkins/VERSION/PATH
** where DIRECTORY is the root of the mount, VERSION is any valid
** check-in name (examples: "trunk" or "tip" or a tag or any unique
** prefix of a SHA1 hash, etc) and PATH is the pathname of the file
** in the checkin.  If DIRECTORY does not exist, then an attempt is
** made to create it.
**
** The DIRECTORY/checkins directory is not searchable so one cannot
** do "ls DIRECTORY/checkins" to get a listing of all possible checkin
** names.  There are countless variations on checkin names and it is
** impractical to list them all.  But all other directories are searchable
** and so the "ls" command will work everywhere else in the fusefs
** file hierarchy.
**
** The FuseFS typically only works on Linux, and then only on Linux
** systems that have the right kernel drivers and have install the
** appropriate support libraries.
**
** This command uses the Fuse Filesystem to mount a directory at
** DIRECTORY that contains the content of all check-ins in the
** repository.  The names of files are DIRECTORY/checkins/VERSION/PATH
** where DIRECTORY is the root of the mount, VERSION is any valid
** check-in name (examples: "trunk" or "tip" or a tag or any unique
** prefix of a SHA1 hash, etc) and PATH is the pathname of the file
** in the check-in.  If DIRECTORY does not exist, then an attempt is
** made to create it.
**
** The DIRECTORY/checkins directory is not searchable so one cannot
** do "ls DIRECTORY/checkins" to get a listing of all possible check-in
** names.  There are countless variations on check-in names and it is
** impractical to list them all.  But all other directories are searchable
** and so the "ls" command will work everywhere else in the fusefs
** file hierarchy.
**
** The FuseFS typically only works on Linux, and then only on Linux
** systems that have the right kernel drivers and have installed the
** appropriate support libraries.
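
As a concrete illustration of the naming scheme described above (src/main.c is
a made-up path; "trunk" and "tip" are the example check-in names given above),
the same file appears at paths such as:

   DIRECTORY/checkins/trunk/src/main.c
   DIRECTORY/checkins/tip/src/main.c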

Changes to src/graph.c.

  }
  p->mxRail = -1;

  /* Purge merge-parents that are out-of-graph if descenders are not
  ** drawn.
  **
  ** Each node has one primary parent and zero or more "merge" parents.
  ** A merge parent is a prior checkin from which changes were merged into
  ** the current check-in.  If a merge parent is not in the visible section
  ** of this graph, then no arrows will be drawn for it, so remove it from
  ** the aParent[] array.
  */
  if( omitDescenders ){
    for(pRow=p->pFirst; pRow; pRow=pRow->pNext){
      for(i=1; i<pRow->nParent; i++){
  }
  p->mxRail = -1;

  /* Purge merge-parents that are out-of-graph if descenders are not
  ** drawn.
  **
  ** Each node has one primary parent and zero or more "merge" parents.
  ** A merge parent is a prior check-in from which changes were merged into
  ** the current check-in.  If a merge parent is not in the visible section
  ** of this graph, then no arrows will be drawn for it, so remove it from
  ** the aParent[] array.
  */
  if( omitDescenders ){
    for(pRow=p->pFirst; pRow; pRow=pRow->pNext){
      for(i=1; i<pRow->nParent; i++){

Changes to src/import.c.

    ** The XMARK table provides a mapping from fast-import "marks" and symbols
    ** into artifact ids (UUIDs - the 40-byte hex SHA1 hash of artifacts).
    ** Given any valid fast-import symbol, the corresponding fossil rid and
    ** uuid can found by searching against the xmark.tname field.
    **
    ** The XBRANCH table maps commit marks and symbols into the branch those
    ** commits belong to.  If xbranch.tname is a fast-import symbol for a
    ** checkin then xbranch.brnm is the branch that checkin is part of.
    **
    ** The XTAG table records information about tags that need to be applied
    ** to various branches after the import finishes.  The xtag.tcontent field
    ** contains the text of an artifact that will add a tag to a check-in.
    ** The git-fast-export file format might specify the same tag multiple
    ** times but only the last tag should be used.  And we do not know which
    ** occurrence of the tag is the last until the import finishes.
    ** The XMARK table provides a mapping from fast-import "marks" and symbols
    ** into artifact ids (UUIDs - the 40-byte hex SHA1 hash of artifacts).
    ** Given any valid fast-import symbol, the corresponding fossil rid and
    ** uuid can be found by searching against the xmark.tname field.
    **
    ** The XBRANCH table maps commit marks and symbols into the branch those
    ** commits belong to.  If xbranch.tname is a fast-import symbol for a
    ** check-in then xbranch.brnm is the branch that check-in is part of.
    **
    ** The XTAG table records information about tags that need to be applied
    ** to various branches after the import finishes.  The xtag.tcontent field
    ** contains the text of an artifact that will add a tag to a check-in.
    ** The git-fast-export file format might specify the same tag multiple
    ** times but only the last tag should be used.  And we do not know which
    ** occurrence of the tag is the last until the import finishes.
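
To make the role of these tables concrete, here is a sketch (not code from this
check-in) of the kind of lookup the comment describes, using only the columns it
names and the db_text helper used elsewhere in this diff; zMark is hypothetical:

    /* Given a fast-import commit symbol zMark, find the branch it was
    ** assigned to via the XBRANCH mapping described above. */
    char *zBranch = db_text(0,
       "SELECT brnm FROM xbranch WHERE tname=%Q", zMark);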

Changes to src/info.c.

      fossil_print("config-db:    %s\n", g.zConfigDbName);
    }
    fossil_print("project-code: %s\n", db_get("project-code", ""));
    vid = g.localOpen ? db_lget_int("checkout", 0) : 0;
    if( vid ){
      show_common_info(vid, "checkout:", 1, 1);
    }
    fossil_print("checkins:     %d\n",
                 db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/"));
  }else{
    int rid;
    rid = name_to_rid(g.argv[2]);
    if( rid==0 ){
      fossil_fatal("no such object: %s\n", g.argv[2]);
    }
      fossil_print("config-db:    %s\n", g.zConfigDbName);
    }
    fossil_print("project-code: %s\n", db_get("project-code", ""));
    vid = g.localOpen ? db_lget_int("checkout", 0) : 0;
    if( vid ){
      show_common_info(vid, "checkout:", 1, 1);
    }
    fossil_print("check-ins:    %d\n",
                 db_int(-1, "SELECT count(*) FROM event WHERE type='ci' /*scan*/"));
  }else{
    int rid;
    rid = name_to_rid(g.argv[2]);
    if( rid==0 ){
      fossil_fatal("no such object: %s\n", g.argv[2]);
    }
void ci_page(void){
  Stmt q1, q2, q3;
  int rid;
  int isLeaf;
  int verboseFlag;     /* True to show diffs */
  int sideBySide;      /* True for side-by-side diffs */
  u64 diffFlags;       /* Flag parameter for text_diff() */
  const char *zName;   /* Name of the checkin to be displayed */
  const char *zUuid;   /* UUID of zName */
  const char *zParent; /* UUID of the parent checkin (if any) */
  const char *zRe;     /* regex parameter */
  ReCompiled *pRe = 0; /* regex */
  const char *zW;      /* URL param for ignoring whitespace */
  const char *zPage = "vinfo";  /* Page that shows diffs */
  const char *zPageHide = "ci"; /* Page that hides diffs */

  login_check_credentials();
void ci_page(void){
  Stmt q1, q2, q3;
  int rid;
  int isLeaf;
  int verboseFlag;     /* True to show diffs */
  int sideBySide;      /* True for side-by-side diffs */
  u64 diffFlags;       /* Flag parameter for text_diff() */
  const char *zName;   /* Name of the check-in to be displayed */
  const char *zUuid;   /* UUID of zName */
  const char *zParent; /* UUID of the parent check-in (if any) */
  const char *zRe;     /* regex parameter */
  ReCompiled *pRe = 0; /* regex */
  const char *zW;      /* URL param for ignoring whitespace */
  const char *zPage = "vinfo";  /* Page that shows diffs */
  const char *zPageHide = "ci"; /* Page that hides diffs */

  login_check_credentials();
      @ <tr><th>Original&nbsp;User:</th><td>
      hyperlink_to_user(zUser,zDate,"</td></tr>");
    }else{
      @ <tr><th>User:</th><td>
      hyperlink_to_user(zUser,zDate,"</td></tr>");
    }
    if( zEComment ){
      @ <tr><th>Edited&nbsp;Comment:</th><td class="infoComment">%!w(zEComment)</td></tr>
      @ <tr><th>Original&nbsp;Comment:</th><td class="infoComment">%!w(zComment)</td></tr>
    }else{
      @ <tr><th>Comment:</th><td class="infoComment">%!w(zComment)</td></tr>
    }
    if( g.perm.Admin ){
      db_prepare(&q2,
         "SELECT rcvfrom.ipaddr, user.login, datetime(rcvfrom.mtime)"
         "  FROM blob JOIN rcvfrom USING(rcvid) LEFT JOIN user USING(uid)"
         " WHERE blob.rid=%d",
         rid
      @ <tr><th>Original&nbsp;User:</th><td>
      hyperlink_to_user(zUser,zDate,"</td></tr>");
    }else{
      @ <tr><th>User:</th><td>
      hyperlink_to_user(zUser,zDate,"</td></tr>");
    }
    if( zEComment ){
      @ <tr><th>Edited&nbsp;Comment:</th><td class="infoComment">%!W(zEComment)</td></tr>
      @ <tr><th>Original&nbsp;Comment:</th><td class="infoComment">%!W(zComment)</td></tr>
    }else{
      @ <tr><th>Comment:</th><td class="infoComment">%!W(zComment)</td></tr>
    }
    if( g.perm.Admin ){
      db_prepare(&q2,
         "SELECT rcvfrom.ipaddr, user.login, datetime(rcvfrom.mtime)"
         "  FROM blob JOIN rcvfrom USING(rcvid) LEFT JOIN user USING(uid)"
         " WHERE blob.rid=%d",
         rid
  style_header("URL Error");
  @ <h1>Error</h1>
  @ <p>%h(z)</p>
  style_footer();
}

/*
** Find an checkin based on query parameter zParam and parse its
** manifest.  Return the number of errors.
*/
static Manifest *vdiff_parse_manifest(const char *zParam, int *pRid){
  int rid;

  *pRid = rid = name_to_rid_www(zParam);
  if( rid==0 ){
    const char *z = P(zParam);
    if( z==0 || z[0]==0 ){
      webpage_error("Missing \"%s\" query parameter.", zParam);
    }else{
      webpage_error("No such artifact: \"%s\"", z);
    }
    return 0;
  }
  if( !is_a_version(rid) ){
    webpage_error("Artifact %s is not a checkin.", P(zParam));
    return 0;
  }
  return manifest_get(rid, CFTYPE_MANIFEST, 0);
}

/*
** Output a description of a check-in
  style_header("URL Error");
  @ <h1>Error</h1>
  @ <p>%h(z)</p>
  style_footer();
}

/*
** Find a check-in based on query parameter zParam and parse its
** manifest.  Return the number of errors.
*/
static Manifest *vdiff_parse_manifest(const char *zParam, int *pRid){
  int rid;

  *pRid = rid = name_to_rid_www(zParam);
  if( rid==0 ){
    const char *z = P(zParam);
    if( z==0 || z[0]==0 ){
      webpage_error("Missing \"%s\" query parameter.", zParam);
    }else{
      webpage_error("No such artifact: \"%s\"", z);
    }
    return 0;
  }
  if( !is_a_version(rid) ){
    webpage_error("Artifact %s is not a check-in.", P(zParam));
    return 0;
  }
  return manifest_get(rid, CFTYPE_MANIFEST, 0);
}

/*
** Output a description of a check-in
**   to=TAG          Right side of the comparison
**   branch=TAG      Show all changes on a particular branch
**   v=BOOLEAN       Default true.  If false, only list files that have changed
**   sbs=BOOLEAN     Side-by-side diff if true.  Unified diff if false
**   glob=STRING     only diff files matching this glob
**
**
** Show all differences between two checkins.
*/
void vdiff_page(void){
  int ridFrom, ridTo;
  int verboseFlag = 0;
  int sideBySide = 0;
  u64 diffFlags = 0;
  Manifest *pFrom, *pTo;
**   to=TAG          Right side of the comparison
**   branch=TAG      Show all changes on a particular branch
**   v=BOOLEAN       Default true.  If false, only list files that have changed
**   sbs=BOOLEAN     Side-by-side diff if true.  Unified diff if false
**   glob=STRING     only diff files matching this glob
**
**
** Show all differences between two check-ins.
*/
void vdiff_page(void){
  int ridFrom, ridTo;
  int verboseFlag = 0;
  int sideBySide = 0;
  u64 diffFlags = 0;
  Manifest *pFrom, *pTo;
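
For illustration, the parameters listed above combine into URLs such as the
following (using the same TAG placeholders as the comment):

   /vdiff?from=TAG1&to=TAG2&sbs=1      (side-by-side diff between two check-ins)
   /vdiff?branch=TAG&v=false           (changes on one branch, file names only)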
        @ <ul>
      }
      prevName = fossil_strdup(zName);
    }
    if( showDetail ){
      @ <li>
      hyperlink_to_date(zDate,"");
      @ &mdash; part of checkin
      hyperlink_to_uuid(zVers);
    }else{
      @ &mdash; part of checkin
      hyperlink_to_uuid(zVers);
      @ at
      hyperlink_to_date(zDate,"");
    }
    if( zBr && zBr[0] ){
      @ on branch %z(href("%R/timeline?r=%T",zBr))%h(zBr)</a>
    }
    @ &mdash; %!w(zCom) (user:
    hyperlink_to_user(zUser,zDate,")");
    if( g.perm.Hyperlink ){
      @ %z(href("%R/finfo?name=%T&ci=%!S",zName,zVers))[ancestry]</a>
      @ %z(href("%R/annotate?filename=%T&checkin=%!S",zName,zVers))
      @ [annotate]</a>
      @ %z(href("%R/blame?filename=%T&checkin=%!S",zName,zVers))
      @ [blame]</a>
        @ <ul>
      }
      prevName = fossil_strdup(zName);
    }
    if( showDetail ){
      @ <li>
      hyperlink_to_date(zDate,"");
      @ &mdash; part of check-in
      hyperlink_to_uuid(zVers);
    }else{
      @ &mdash; part of check-in
      hyperlink_to_uuid(zVers);
      @ at
      hyperlink_to_date(zDate,"");
    }
    if( zBr && zBr[0] ){
      @ on branch %z(href("%R/timeline?r=%T",zBr))%h(zBr)</a>
    }
    @ &mdash; %!W(zCom) (user:
    hyperlink_to_user(zUser,zDate,")");
    if( g.perm.Hyperlink ){
      @ %z(href("%R/finfo?name=%T&ci=%!S",zName,zVers))[ancestry]</a>
      @ %z(href("%R/annotate?filename=%T&checkin=%!S",zName,zVers))
      @ [annotate]</a>
      @ %z(href("%R/blame?filename=%T&checkin=%!S",zName,zVers))
      @ [blame]</a>
        hyperlink_to_event_tagid(db_column_int(&q, 5));
      }else{
        @ Control file referencing
      }
      if( zType[0]!='e' ){
        hyperlink_to_uuid(zUuid);
      }
      @ - %!w(zCom) by
      hyperlink_to_user(zUser,zDate," on");
      hyperlink_to_date(zDate, ".");
      if( pDownloadName && blob_size(pDownloadName)==0 ){
        blob_appendf(pDownloadName, "%S.txt", zUuid);
      }
      tag_private_status(rid);
      cnt++;
        hyperlink_to_event_tagid(db_column_int(&q, 5));
      }else{
        @ Control file referencing
      }
      if( zType[0]!='e' ){
        hyperlink_to_uuid(zUuid);
      }
      @ - %!W(zCom) by
      hyperlink_to_user(zUser,zDate," on");
      hyperlink_to_date(zDate, ".");
      if( pDownloadName && blob_size(pDownloadName)==0 ){
        blob_appendf(pDownloadName, "%S.txt", zUuid);
      }
      tag_private_status(rid);
      cnt++;
  @ <blockquote><pre>
  hexdump(&content);
  @ </pre></blockquote>
  style_footer();
}

/*
** Attempt to lookup the specified checkin and file name into an rid.
*/
int artifact_from_ci_and_filename(
  const char *zCI,
  const char *zFilename
){
  int cirid;
  Manifest *pManifest;
  @ <blockquote><pre>
  hexdump(&content);
  @ </pre></blockquote>
  style_footer();
}

/*
** Attempt to resolve the specified check-in and file name to an rid.
*/
int artifact_from_ci_and_filename(
  const char *zCI,
  const char *zFilename
){
  int cirid;
  Manifest *pManifest;
/*
** The "z" argument is a string that contains the text of a source code
** file.  This routine appends that text to the HTTP reply with line numbering.
**
** zLn is the ?ln= parameter for the HTTP query.  If there is an argument,
** then highlight that line number and scroll to it once the page loads.
** If there are two line numbers, highlight the range of lines.


*/
void output_text_with_line_numbers(
  const char *z,
  const char *zLn
){
  int iStart, iEnd;    /* Start and end of region to highlight */
  int n = 0;           /* Current line number */
  int i;               /* Loop index */
  int iTop = 0;        /* Scroll so that this line is on top of screen. */


  iStart = iEnd = atoi(zLn);


  if( iStart>0 ){

    for(i=0; fossil_isdigit(zLn[i]); i++){}
    if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){
      i++;
      while( zLn[i]=='.' ){ i++; }
      iEnd = atoi(&zLn[i]);

    }

    if( iEnd<iStart ) iEnd = iStart;










    iTop = iStart - 15 + (iEnd-iStart)/4;
    if( iTop>iStart - 2 ) iTop = iStart-2;
  }

  @ <pre>
  while( z[0] ){
    n++;








    for(i=0; z[i] && z[i]!='\n'; i++){}
    if( n==iTop ) cgi_append_content("<span id=\"topln\">", -1);
    if( n==iStart ){
      cgi_append_content("<div class=\"selectedText\">",-1);
    }
    cgi_printf("%6d  ", n);
    if( i>0 ){
      char *zHtml = htmlize(z, i);
      cgi_append_content(zHtml, -1);
      fossil_free(zHtml);
    }
    if( n==iStart-15 ) cgi_append_content("</span>", -1);
    if( n==iEnd ) cgi_append_content("</div>", -1);
    else cgi_append_content("\n", 1);
    z += i;
    if( z[0]=='\n' ) z++;
  }
  if( n<iEnd ) cgi_printf("</div>");
  @ </pre>
  if( iStart ){
    @ <script>gebi('topln').scrollIntoView(true);</script>
  }
}


/*
** WEBPAGE: whatis
** WEBPAGE: artifact
**
** URL: /artifact/SHA1HASH
** URL: /artifact?ci=CHECKIN&filename=PATH
** URL: /whatis/SHA1HASH
**
** Additional query parameters:
**
**   ln              - show line numbers
**   ln=N            - highlight line number N
**   ln=M-N          - highlight lines M through N inclusive

**   verbose         - show more detail in the description
**
** The /artifact page show the complete content of a file
** identified by SHA1HASH as preformatted text.  The
** /whatis page shows only a description of the file.
*/
void artifact_page(void){







/*
** The "z" argument is a string that contains the text of a source code
** file.  This routine appends that text to the HTTP reply with line numbering.
**
** zLn is the ?ln= parameter for the HTTP query.  If there is an argument,
** then highlight that line number and scroll to it once the page loads.
** If there are two line numbers, highlight the range of lines.
** Multiple ranges can be highlighted by adding additional line numbers
** separated by a non-digit character (also not one of [-,.]).
*/
void output_text_with_line_numbers(
  const char *z,
  const char *zLn
){
  int iStart, iEnd;    /* Start and end of region to highlight */
  int n = 0;           /* Current line number */
  int i = 0;           /* Loop index */
  int iTop = 0;        /* Scroll so that this line is on top of screen. */
  Stmt q;

  iStart = iEnd = atoi(zLn);
  db_multi_exec(
    "CREATE TEMP TABLE lnos(iStart INTEGER PRIMARY KEY, iEnd INTEGER)");
  if( iStart>0 ){
    do{
      while( fossil_isdigit(zLn[i]) ) i++;
      if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){
        i++;
        while( zLn[i]=='.' ){ i++; }
        iEnd = atoi(&zLn[i]);
        while( fossil_isdigit(zLn[i]) ) i++;
      }
      while( fossil_isdigit(zLn[i]) ) i++;
      if( iEnd<iStart ) iEnd = iStart;
      db_multi_exec(
        "INSERT OR REPLACE INTO lnos VALUES(%d,%d)", iStart, iEnd
      );
      iStart = iEnd = atoi(&zLn[i++]);
    }while( zLn[i] && iStart && iEnd );
  }
  db_prepare(&q, "SELECT min(iStart), iEnd FROM lnos");
  if( db_step(&q)==SQLITE_ROW ){
    iStart = db_column_int(&q, 0);
    iEnd = db_column_int(&q, 1);
    iTop = iStart - 15 + (iEnd-iStart)/4;
    if( iTop>iStart - 2 ) iTop = iStart-2;
  }
  db_finalize(&q);
  @ <pre>
  while( z[0] ){
    n++;
    db_prepare(&q,
      "SELECT min(iStart), max(iEnd) FROM lnos"
      " WHERE iStart <= %d AND iEnd >= %d", n, n);
    if( db_step(&q)==SQLITE_ROW ){
      iStart = db_column_int(&q, 0);
      iEnd = db_column_int(&q, 1);
    }
    db_finalize(&q);
    for(i=0; z[i] && z[i]!='\n'; i++){}
    if( n==iTop ) cgi_append_content("<span id=\"topln\">", -1);
    if( n==iStart ){
      cgi_append_content("<div class=\"selectedText\">",-1);
    }
    cgi_printf("%6d  ", n);
    if( i>0 ){
      char *zHtml = htmlize(z, i);
      cgi_append_content(zHtml, -1);
      fossil_free(zHtml);
    }
    if( n==iTop ) cgi_append_content("</span>", -1);
    if( n==iEnd ) cgi_append_content("</div>", -1);
    else cgi_append_content("\n", 1);
    z += i;
    if( z[0]=='\n' ) z++;
  }
  if( n<iEnd ) cgi_printf("</div>");
  @ </pre>
  if( db_int(0, "SELECT EXISTS(SELECT 1 FROM lnos)") ){
    @ <script>gebi('topln').scrollIntoView(true);</script>
  }
}
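
The parsing loop above now accepts more than one range.  The following
stand-alone sketch is an approximation for illustration only (it is not the
Fossil code itself, and parse_ln is a made-up name); it shows how an ln=
value such as "10-20+35-40" breaks down into the (start,end) pairs that the
routine stores in its temporary lnos table.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustration only: split an "ln=" value into (start,end) pairs in the
** same spirit as output_text_with_line_numbers(). */
static void parse_ln(const char *zLn){
  int i = 0;
  while( zLn[i] ){
    int iStart, iEnd;
    while( zLn[i] && !isdigit((unsigned char)zLn[i]) ) i++; /* skip separator */
    if( zLn[i]==0 ) break;
    iStart = iEnd = atoi(&zLn[i]);
    while( isdigit((unsigned char)zLn[i]) ) i++;
    if( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ){
      while( zLn[i]==',' || zLn[i]=='-' || zLn[i]=='.' ) i++;
      iEnd = atoi(&zLn[i]);
      while( isdigit((unsigned char)zLn[i]) ) i++;
    }
    if( iEnd<iStart ) iEnd = iStart;
    printf("highlight lines %d through %d\n", iStart, iEnd);
  }
}

int main(void){
  parse_ln("10-20+35-40");  /* two ranges: 10..20 and 35..40 */
  parse_ln("7");            /* single line: 7..7 */
  return 0;
}
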


/*
** WEBPAGE: whatis
** WEBPAGE: artifact
**
** URL: /artifact/SHA1HASH
** URL: /artifact?ci=CHECKIN&filename=PATH
** URL: /whatis/SHA1HASH
**
** Additional query parameters:
**
**   ln              - show line numbers
**   ln=N            - highlight line number N
**   ln=M-N          - highlight lines M through N inclusive
**   ln=M-N+Y-Z      - highlight lines M through N and Y through Z (inclusive)
**   verbose         - show more detail in the description
**
** The /artifact page show the complete content of a file
** identified by SHA1HASH as preformatted text.  The
** /whatis page shows only a description of the file.
*/
void artifact_page(void){
    @ <h2>Artifact %s(zUuid):</h2>
  }
  blob_zero(&downloadName);
  objType = object_description(rid, objdescFlags, &downloadName);
  style_submenu_element("Download", "Download",
          "%R/raw/%T?name=%s", blob_str(&downloadName), zUuid);
  if( db_exists("SELECT 1 FROM mlink WHERE fid=%d", rid) ){
    style_submenu_element("Checkins Using", "Checkins Using",
          "%R/timeline?n=200&uf=%s",zUuid);
  }
  asText = P("txt")!=0;
  zMime = mimetype_from_name(blob_str(&downloadName));
  if( zMime ){
    if( fossil_strcmp(zMime, "text/html")==0 ){
      if( asText ){







    @ <h2>Artifact %s(zUuid):</h2>
  }
  blob_zero(&downloadName);
  objType = object_description(rid, objdescFlags, &downloadName);
  style_submenu_element("Download", "Download",
          "%R/raw/%T?name=%s", blob_str(&downloadName), zUuid);
  if( db_exists("SELECT 1 FROM mlink WHERE fid=%d", rid) ){
    style_submenu_element("Check-ins Using", "Check-ins Using",
          "%R/timeline?n=200&uf=%s",zUuid);
  }
  asText = P("txt")!=0;
  zMime = mimetype_from_name(blob_str(&downloadName));
  if( zMime ){
    if( fossil_strcmp(zMime, "text/html")==0 ){
      if( asText ){
    @ <blockquote>
    @ <table border=0>
    if( zNewColor && zNewColor[0] ){
      @ <tr><td style="background-color: %h(zNewColor);">
    }else{
      @ <tr><td>
    }
    @ %!w(blob_str(&comment))
    blob_zero(&suffix);
    blob_appendf(&suffix, "(user: %h", zNewUser);
    db_prepare(&q, "SELECT substr(tagname,5) FROM tagxref, tag"
                   " WHERE tagname GLOB 'sym-*' AND tagxref.rid=%d"
                   "   AND tagtype>1 AND tag.tagid=tagxref.tagid",
                   rid);
    while( db_step(&q)==SQLITE_ROW ){







    @ <blockquote>
    @ <table border=0>
    if( zNewColor && zNewColor[0] ){
      @ <tr><td style="background-color: %h(zNewColor);">
    }else{
      @ <tr><td>
    }
    @ %!W(blob_str(&comment))
    blob_zero(&suffix);
    blob_appendf(&suffix, "(user: %h", zNewUser);
    db_prepare(&q, "SELECT substr(tagname,5) FROM tagxref, tag"
                   " WHERE tagname GLOB 'sym-*' AND tagxref.rid=%d"
                   "   AND tagtype>1 AND tag.tagid=tagxref.tagid",
                   rid);
    while( db_step(&q)==SQLITE_ROW ){

Changes to src/json_artifact.c.

  }
  db_finalize(&q);
  return cson_array_value(pParents);
}

/*
** Generates an artifact Object for the given rid,
** which must refer to a Checkin.
**
** Returned value is NULL or an Object owned by the caller.
*/
cson_value * json_artifact_for_ci( int rid, char showFiles ){
  cson_value * v = NULL;
  Stmt q = empty_Stmt;
  static cson_value * eventTypeLabel = NULL;







  }
  db_finalize(&q);
  return cson_array_value(pParents);
}

/*
** Generates an artifact Object for the given rid,
** which must refer to a Check-in.
**
** Returned value is NULL or an Object owned by the caller.
*/
cson_value * json_artifact_for_ci( int rid, char showFiles ){
  cson_value * v = NULL;
  Stmt q = empty_Stmt;
  static cson_value * eventTypeLabel = NULL;
  cson_object_set(pay, "user", json_new_string(pTktChng->zUser));
  cson_object_set(pay, "timestamp", json_julian_to_timestamp(pTktChng->rDate));
  manifest_destroy(pTktChng);
  return cson_object_value(pay);
}

/*
** Sub-impl of /json/artifact for checkins.
*/
static cson_value * json_artifact_ci( cson_object * zParent, int rid ){
  if(!g.perm.Read){
    json_set_err( FSL_JSON_E_DENIED, "Viewing checkins requires 'o' privileges." );
    return NULL;
  }else{
    cson_value * artV = json_artifact_for_ci(rid, 1);
    cson_object * art = cson_value_get_object(artV);
    if(art){
      cson_object_merge( zParent, art, CSON_MERGE_REPLACE );
      cson_free_object(art);







  cson_object_set(pay, "user", json_new_string(pTktChng->zUser));
  cson_object_set(pay, "timestamp", json_julian_to_timestamp(pTktChng->rDate));
  manifest_destroy(pTktChng);
  return cson_object_value(pay);
}

/*
** Sub-impl of /json/artifact for check-ins.
*/
static cson_value * json_artifact_ci( cson_object * zParent, int rid ){
  if(!g.perm.Read){
    json_set_err( FSL_JSON_E_DENIED, "Viewing check-ins requires 'o' privileges." );
    return NULL;
  }else{
    cson_value * artV = json_artifact_for_ci(rid, 1);
    cson_object * art = cson_value_get_object(artV);
    if(art){
      cson_object_merge( zParent, art, CSON_MERGE_REPLACE );
      cson_free_object(art);
                       rid
                       );
  if(parentUuid){
    cson_object_set( zParent, "parent", json_new_string(parentUuid) );
    fossil_free(parentUuid);
  }
  
  /* Find checkins associated with this file... */
  db_prepare(&q,
      "SELECT filename.name AS name, "
      "  (mlink.pid==0) AS isNew,"
      "  (mlink.fid==0) AS isDel,"
      "  cast(strftime('%%s',event.mtime) as int) AS timestamp,"
      "  coalesce(event.ecomment,event.comment) as comment,"
      "  coalesce(event.euser,event.user) as user,"
#if 0
      "  a.size AS size," /* same for all checkins. */
#endif
      "  b.uuid as checkin, "
#if 0
      "  mlink.mperm as mperm,"
#endif
      "  coalesce((SELECT value FROM tagxref"
                      "  WHERE tagid=%d AND tagtype>0 AND "
                      " rid=mlink.mid),'trunk') as branch"
      "  FROM mlink, filename, event, blob a, blob b"
      " WHERE filename.fnid=mlink.fnid"
      "   AND event.objid=mlink.mid"
      "   AND a.rid=mlink.fid"
      "   AND b.rid=mlink.mid"
      "   AND mlink.fid=%d"
      "   ORDER BY filename.name, event.mtime",
      TAG_BRANCH, rid
    );
  /* TODO: add a "state" flag for the file in each checkin,
     e.g. "modified", "new", "deleted".
   */
  checkin_arr = cson_new_array(); 
  cson_object_set(pay, "checkins", cson_array_value(checkin_arr));
  while( (SQLITE_ROW==db_step(&q) ) ){
    cson_object * row = cson_value_get_object(cson_sqlite3_row_to_object(q.pStmt));
    /* FIXME: move this isNew/isDel stuff into an SQL CASE statement. */







                       rid
                       );
  if(parentUuid){
    cson_object_set( zParent, "parent", json_new_string(parentUuid) );
    fossil_free(parentUuid);
  }
  
  /* Find check-ins associated with this file... */
  db_prepare(&q,
      "SELECT filename.name AS name, "
      "  (mlink.pid==0) AS isNew,"
      "  (mlink.fid==0) AS isDel,"
      "  cast(strftime('%%s',event.mtime) as int) AS timestamp,"
      "  coalesce(event.ecomment,event.comment) as comment,"
      "  coalesce(event.euser,event.user) as user,"
#if 0
      "  a.size AS size," /* same for all check-ins. */
#endif
      "  b.uuid as checkin, "
#if 0
      "  mlink.mperm as mperm,"
#endif
      "  coalesce((SELECT value FROM tagxref"
                      "  WHERE tagid=%d AND tagtype>0 AND "
                      " rid=mlink.mid),'trunk') as branch"
      "  FROM mlink, filename, event, blob a, blob b"
      " WHERE filename.fnid=mlink.fnid"
      "   AND event.objid=mlink.mid"
      "   AND a.rid=mlink.fid"
      "   AND b.rid=mlink.mid"
      "   AND mlink.fid=%d"
      "   ORDER BY filename.name, event.mtime",
      TAG_BRANCH, rid
    );
  /* TODO: add a "state" flag for the file in each check-in,
     e.g. "modified", "new", "deleted".
   */
  checkin_arr = cson_new_array(); 
  cson_object_set(pay, "checkins", cson_array_value(checkin_arr));
  while( (SQLITE_ROW==db_step(&q) ) ){
    cson_object * row = cson_value_get_object(cson_sqlite3_row_to_object(q.pStmt));
    /* FIXME: move this isNew/isDel stuff into an SQL CASE statement. */
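
Each row returned by the query above becomes one element of the "checkins"
array in the JSON payload.  Based on the column aliases in the SELECT, a
single element would look roughly like the following; every value shown is
hypothetical and is given only to make the shape concrete.

{
  "name": "src/example.c",
  "isNew": 0,
  "isDel": 0,
  "timestamp": 1420000000,
  "comment": "an example commit comment",
  "user": "alice",
  "checkin": "<40-character SHA1 of the check-in>",
  "branch": "trunk"
}
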

Changes to src/json_dir.c.

  */
  if( zCI && *zCI ){
    pM = manifest_get_by_name(zCI, &rid);
    if( pM ){
      zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid);
    }else{
      json_set_err(FSL_JSON_E_UNRESOLVED_UUID,
                   "Checkin name [%s] is unresolved.",
                   zCI);
      return NULL;
    }
  }

  /* Jump through some hoops to find the directory name... */
  zDX = json_find_option_cstr("name",NULL,NULL);







  */
  if( zCI && *zCI ){
    pM = manifest_get_by_name(zCI, &rid);
    if( pM ){
      zUuid = db_text(0, "SELECT uuid FROM blob WHERE rid=%d", rid);
    }else{
      json_set_err(FSL_JSON_E_UNRESOLVED_UUID,
                   "Check-in name [%s] is unresolved.",
                   zCI);
      return NULL;
    }
  }

  /* Jump through some hoops to find the directory name... */
  zDX = json_find_option_cstr("name",NULL,NULL);

Changes to src/json_finfo.c.


  if( zCheckin && *zCheckin ){
    char * zU = NULL;
    int rc = name_to_uuid2( zCheckin, "ci", &zU );
    /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/
    if(rc<=0){
      json_set_err((rc<0) ? FSL_JSON_E_AMBIGUOUS_UUID : FSL_JSON_E_RESOURCE_NOT_FOUND,
                   "Checkin UUID %s.", (rc<0) ? "is ambiguous" : "not found");
      blob_reset(&sql);
      return NULL;
    }
    blob_append_sql(&sql, " AND ci.uuid='%q'", zU);
    free(zU);
  }else{
    if( zAfter && *zAfter ){








  if( zCheckin && *zCheckin ){
    char * zU = NULL;
    int rc = name_to_uuid2( zCheckin, "ci", &zU );
    /*printf("zCheckin=[%s], zU=[%s]", zCheckin, zU);*/
    if(rc<=0){
      json_set_err((rc<0) ? FSL_JSON_E_AMBIGUOUS_UUID : FSL_JSON_E_RESOURCE_NOT_FOUND,
                   "Check-in UUID %s.", (rc<0) ? "is ambiguous" : "not found");
      blob_reset(&sql);
      return NULL;
    }
    blob_append_sql(&sql, " AND ci.uuid='%q'", zU);
    free(zU);
  }else{
    if( zAfter && *zAfter ){

Changes to src/json_tag.c.

    }
  }
  payV = cson_value_new_object();
  pay = cson_value_get_object(payV);
  cson_object_set(pay, "raw", cson_value_new_bool(fRaw) );
  if( zCheckin ){
    /**
       Tags for a specific checkin. Output format:

       RAW mode:
    
       {
           "sym-tagname": (value || null),
           ...other tags...
       }

       Non-raw:

       {
          "tagname": (value || null),
          ...other tags...
       }
    */
    cson_value * objV = NULL;
    cson_object * obj = NULL;
    int const rid = name_to_rid(zCheckin);
    if(0==rid){
      json_set_err(FSL_JSON_E_UNRESOLVED_UUID,
                   "Could not find artifact for checkin [%s].",
                   zCheckin);
      goto error;
    }
    cson_object_set(pay, "checkin", json_new_string(zCheckin));
    db_prepare(&q,
               "SELECT tagname, value FROM tagxref, tag"
               " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid"







    }
  }
  payV = cson_value_new_object();
  pay = cson_value_get_object(payV);
  cson_object_set(pay, "raw", cson_value_new_bool(fRaw) );
  if( zCheckin ){
    /**
       Tags for a specific check-in. Output format:

       RAW mode:
    
       {
           "sym-tagname": (value || null),
           ...other tags...
       }

       Non-raw:

       {
          "tagname": (value || null),
          ...other tags...
       }
    */
    cson_value * objV = NULL;
    cson_object * obj = NULL;
    int const rid = name_to_rid(zCheckin);
    if(0==rid){
      json_set_err(FSL_JSON_E_UNRESOLVED_UUID,
                   "Could not find artifact for check-in [%s].",
                   zCheckin);
      goto error;
    }
    cson_object_set(pay, "checkin", json_new_string(zCheckin));
    db_prepare(&q,
               "SELECT tagname, value FROM tagxref, tag"
               " WHERE tagxref.rid=%d AND tagxref.tagid=tag.tagid"

Changes to src/json_timeline.c.

  Stmt q = empty_Stmt;
  char warnRowToJsonFailed = 0;
  Blob sql = empty_blob;
  if( !g.perm.Hyperlink ){
    /* Reminder to self: HTML impl requires 'o' (Read)
       rights.
    */
    json_set_err( FSL_JSON_E_DENIED, "Checkin timeline requires 'h' access." );
    return NULL;
  }
  verboseFlag = json_find_option_bool("verbose",NULL,"v",0);
  if( !verboseFlag ){
    verboseFlag = json_find_option_bool("files",NULL,"f",0);
  }
  payV = cson_value_new_object();







  Stmt q = empty_Stmt;
  char warnRowToJsonFailed = 0;
  Blob sql = empty_blob;
  if( !g.perm.Hyperlink ){
    /* Reminder to self: HTML impl requires 'o' (Read)
       rights.
    */
    json_set_err( FSL_JSON_E_DENIED, "Check-in timeline requires 'h' access." );
    return NULL;
  }
  verboseFlag = json_find_option_bool("verbose",NULL,"v",0);
  if( !verboseFlag ){
    verboseFlag = json_find_option_bool("files",NULL,"f",0);
  }
  payV = cson_value_new_object();

Changes to src/leaf.c.

**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code used to manage the "leaf" table of the
** repository.
**
** The LEAF table contains the rids for all leaves in the checkin DAG.
** A leaf is a checkin that has no children in the same branch.
*/
#include "config.h"
#include "leaf.h"
#include <assert.h>


/*







**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code used to manage the "leaf" table of the
** repository.
**
** The LEAF table contains the rids for all leaves in the check-in DAG.
** A leaf is a check-in that has no children in the same branch.
*/
#include "config.h"
#include "leaf.h"
#include <assert.h>


/*
         " == coalesce((SELECT value FROM tagxref"
                       " WHERE tagid=%d AND rid=plink.cid),'trunk')",
    TAG_BRANCH, TAG_BRANCH
  );
}

/*
** A bag of checkins whose leaf status needs to be checked.
*/
static Bag needToCheck;

/*
** Check to see if checkin "rid" is a leaf and either add it to the LEAF
** table if it is, or remove it if it is not.
*/
void leaf_check(int rid){
  static Stmt checkIfLeaf;
  static Stmt addLeaf;
  static Stmt removeLeaf;
  int rc;







         " == coalesce((SELECT value FROM tagxref"
                       " WHERE tagid=%d AND rid=plink.cid),'trunk')",
    TAG_BRANCH, TAG_BRANCH
  );
}

/*
** A bag of check-ins whose leaf status needs to be checked.
*/
static Bag needToCheck;

/*
** Check to see if check-in "rid" is a leaf and either add it to the LEAF
** table if it is, or remove it if it is not.
*/
void leaf_check(int rid){
  static Stmt checkIfLeaf;
  static Stmt addLeaf;
  static Stmt removeLeaf;
  int rc;
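
As a rough sketch of the leaf rule stated above (this is not the actual
checkIfLeaf statement that leaf_check() prepares), a check-in RID is a leaf
when no plink row makes it the parent of a child on the same branch; the
tagxref/TAG_BRANCH comparison mirrors the fragment shown earlier.  The
helper name is_leaf_sketch is made up.

/* Sketch only -- the real test lives in the checkIfLeaf statement
** built by leaf_check(). */
static int is_leaf_sketch(int rid){
  return !db_exists(
    "SELECT 1 FROM plink"
    " WHERE pid=%d"
    "   AND coalesce((SELECT value FROM tagxref"
    "                  WHERE tagid=%d AND rid=plink.cid),'trunk')"
    "     = coalesce((SELECT value FROM tagxref"
    "                  WHERE tagid=%d AND rid=plink.pid),'trunk')",
    rid, TAG_BRANCH, TAG_BRANCH);
}
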

Changes to src/main.c.

*/
struct FossilUserPerms {
  char Setup;            /* s: use Setup screens on web interface */
  char Admin;            /* a: administrative permission */
  char Delete;           /* d: delete wiki or tickets */
  char Password;         /* p: change password */
  char Query;            /* q: create new reports */
  char Write;            /* i: xfer inbound. checkin */
  char Read;             /* o: xfer outbound. checkout */
  char Hyperlink;        /* h: enable the display of hyperlinks */
  char Clone;            /* g: clone */
  char RdWiki;           /* j: view wiki via web */
  char NewWiki;          /* f: create new wiki via web */
  char ApndWiki;         /* m: append to wiki via web */
  char WrWiki;           /* k: edit wiki via web */
  char ModWiki;          /* l: approve and publish wiki content (Moderator) */







*/
struct FossilUserPerms {
  char Setup;            /* s: use Setup screens on web interface */
  char Admin;            /* a: administrative permission */
  char Delete;           /* d: delete wiki or tickets */
  char Password;         /* p: change password */
  char Query;            /* q: create new reports */
  char Write;            /* i: xfer inbound. check-in */
  char Read;             /* o: xfer outbound. check-out */
  char Hyperlink;        /* h: enable the display of hyperlinks */
  char Clone;            /* g: clone */
  char RdWiki;           /* j: view wiki via web */
  char NewWiki;          /* f: create new wiki via web */
  char ApndWiki;         /* m: append to wiki via web */
  char WrWiki;           /* k: edit wiki via web */
  char ModWiki;          /* l: approve and publish wiki content (Moderator) */
#endif
#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS)
    fossil_print("TCL_PRIVATE_STUBS\n");
#endif
#if defined(FOSSIL_ENABLE_JSON)
    fossil_print("JSON (API %s)\n", FOSSIL_JSON_API_VERSION);
#endif





  }
}


/*
** COMMAND: help
**
** Usage: %fossil help COMMAND
**    or: %fossil COMMAND -help
**
** Display information on how to use COMMAND.  To display a list of
** available commands one of:
**
**    %fossil help              Show common commands
**    %fossil help --a|-all     Show both common and auxiliary commands
**    %fossil help --t|-test    Show test commands only
**    %fossil help --x|-aux     Show auxiliary commands only
**    %fossil help --w|-www     Show list of WWW pages
*/
void help_cmd(void){
  int rc, idx, isPage = 0;
  const char *z;
  const char *zCmdOrPage;
  const char *zCmdOrPagePlural;
  if( g.argc<3 ){







#endif
#if defined(FOSSIL_ENABLE_TCL_PRIVATE_STUBS)
    fossil_print("TCL_PRIVATE_STUBS\n");
#endif
#if defined(FOSSIL_ENABLE_JSON)
    fossil_print("JSON (API %s)\n", FOSSIL_JSON_API_VERSION);
#endif
#if defined(BROKEN_MINGW_CMDLINE)
    fossil_print("MBCS_COMMAND_LINE\n");
#else
    fossil_print("UNICODE_COMMAND_LINE\n");
#endif
  }
}


/*
** COMMAND: help
**
** Usage: %fossil help COMMAND
**    or: %fossil COMMAND --help
**
** Display information on how to use COMMAND.  To display a list of
** available commands one of:
**
**    %fossil help              Show common commands
**    %fossil help -a|--all     Show both common and auxiliary commands
**    %fossil help -t|--test    Show test commands only
**    %fossil help -x|--aux     Show auxiliary commands only
**    %fossil help -w|--www     Show list of WWW pages
*/
void help_cmd(void){
  int rc, idx, isPage = 0;
  const char *z;
  const char *zCmdOrPage;
  const char *zCmdOrPagePlural;
  if( g.argc<3 ){
    cgi_handle_scgi_request();
  }else{
    cgi_handle_http_request(0);
  }
  process_one_web_page(zNotFound, glob_create(zFileGlob), allowRepoList);
#else
  /* Win32 implementation */
  (void)allowRepoList;  /* Suppress warning */
  if( isUiCmd ){
    zBrowser = db_get("web-browser", "start");
    if( zIpAddr ){
      zBrowserCmd = mprintf("%s http://%s:%%d/ &", zBrowser, zIpAddr);
    }else{
      zBrowserCmd = mprintf("%s http://localhost:%%d/ &", zBrowser);
    }
    if( g.repositoryOpen ) flags |= HTTP_SERVER_HAD_REPOSITORY;
    if( g.localOpen ) flags |= HTTP_SERVER_HAD_CHECKOUT;
  }
  db_close(1);



  if( win32_http_service(iPort, zNotFound, zFileGlob, flags) ){
    win32_http_server(iPort, mxPort, zBrowserCmd,
                      zStopperFile, zNotFound, zFileGlob, zIpAddr, flags);
  }
#endif
}








    cgi_handle_scgi_request();
  }else{
    cgi_handle_http_request(0);
  }
  process_one_web_page(zNotFound, glob_create(zFileGlob), allowRepoList);
#else
  /* Win32 implementation */

  if( isUiCmd ){
    zBrowser = db_get("web-browser", "start");
    if( zIpAddr ){
      zBrowserCmd = mprintf("%s http://%s:%%d/ &", zBrowser, zIpAddr);
    }else{
      zBrowserCmd = mprintf("%s http://localhost:%%d/ &", zBrowser);
    }
    if( g.repositoryOpen ) flags |= HTTP_SERVER_HAD_REPOSITORY;
    if( g.localOpen ) flags |= HTTP_SERVER_HAD_CHECKOUT;
  }
  db_close(1);
  if( allowRepoList ){
    flags |= HTTP_SERVER_REPOLIST;
  }
  if( win32_http_service(iPort, zNotFound, zFileGlob, flags) ){
    win32_http_server(iPort, mxPort, zBrowserCmd,
                      zStopperFile, zNotFound, zFileGlob, zIpAddr, flags);
  }
#endif
}

Changes to src/main.mk.

  $(SRCDIR)/../skins/khaki/header.txt \
  $(SRCDIR)/../skins/plain_gray/css.txt \
  $(SRCDIR)/../skins/plain_gray/footer.txt \
  $(SRCDIR)/../skins/plain_gray/header.txt \
  $(SRCDIR)/../skins/rounded1/css.txt \
  $(SRCDIR)/../skins/rounded1/footer.txt \
  $(SRCDIR)/../skins/rounded1/header.txt \



  $(SRCDIR)/diff.tcl \
  $(SRCDIR)/markdown.md

TRANS_SRC = \
  $(OBJDIR)/add_.c \
  $(OBJDIR)/allrepo_.c \
  $(OBJDIR)/attach_.c \







  $(SRCDIR)/../skins/khaki/header.txt \
  $(SRCDIR)/../skins/plain_gray/css.txt \
  $(SRCDIR)/../skins/plain_gray/footer.txt \
  $(SRCDIR)/../skins/plain_gray/header.txt \
  $(SRCDIR)/../skins/rounded1/css.txt \
  $(SRCDIR)/../skins/rounded1/footer.txt \
  $(SRCDIR)/../skins/rounded1/header.txt \
  $(SRCDIR)/../skins/xekri/css.txt \
  $(SRCDIR)/../skins/xekri/footer.txt \
  $(SRCDIR)/../skins/xekri/header.txt \
  $(SRCDIR)/diff.tcl \
  $(SRCDIR)/markdown.md

TRANS_SRC = \
  $(OBJDIR)/add_.c \
  $(OBJDIR)/allrepo_.c \
  $(OBJDIR)/attach_.c \

Changes to src/makemake.tcl.


APPTARGETS += $(LIBTARGETS)

ifdef FOSSIL_BUILD_SSL
APPTARGETS += openssl
endif

$(APPNAME):	$(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(APPTARGETS)
	$(CODECHECK1) $(TRANS_SRC)
	$(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/fossil.o

# This rule prevents make from using its default rules to try build
# an executable named "manifest" out of the file named "manifest.c"
#
$(SRCDIR)/../manifest:








APPTARGETS += $(LIBTARGETS)

ifdef FOSSIL_BUILD_SSL
APPTARGETS += openssl
endif

$(APPNAME):	$(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o
	$(CODECHECK1) $(TRANS_SRC)
	$(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/fossil.o

# This rule prevents make from using its default rules to try build
# an executable named "manifest" out of the file named "manifest.c"
#
$(SRCDIR)/../manifest:
page_index.h: mkindex$E $(SRC)
	$** > $@

builtin_data.h:	mkbuiltin$E $(EXTRA_FILES)
	mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@

clean:
	-del $(OX)\*.obj
	-del *.obj
	-del *_.c
	-del *.h
	-del *.ilk
	-del *.map
	-del *.res
	-del headers
	-del linkopts
	-del vc*.pdb

realclean: clean
	-del $(APPNAME)
	-del $(PDBNAME)
	-del translate$E
	-del translate$P
	-del mkindex$E
	-del mkindex$P
	-del makeheaders$E
	-del makeheaders$P
	-del mkversion$E
	-del mkversion$P
	-del codecheck1$E
	-del codecheck1$P
	-del mkbuiltin$E
	-del mkbuiltin$P

$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h







page_index.h: mkindex$E $(SRC)
	$** > $@

builtin_data.h:	mkbuiltin$E $(EXTRA_FILES)
	mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@

clean:
	del $(OX)\*.obj 2>NUL
	del *.obj 2>NUL
	del *_.c 2>NUL
	del *.h 2>NUL
	del *.ilk 2>NUL
	del *.map 2>NUL
	del *.res 2>NUL
	del headers 2>NUL
	del linkopts 2>NUL
	del vc*.pdb 2>NUL

realclean: clean
	del $(APPNAME) 2>NUL
	del $(PDBNAME) 2>NUL
	del translate$E 2>NUL
	del translate$P 2>NUL
	del mkindex$E 2>NUL
	del mkindex$P 2>NUL
	del makeheaders$E 2>NUL
	del makeheaders$P 2>NUL
	del mkversion$E 2>NUL
	del mkversion$P 2>NUL
	del codecheck1$E 2>NUL
	del codecheck1$P 2>NUL
	del mkbuiltin$E 2>NUL
	del mkbuiltin$P 2>NUL

$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h

Changes to src/manifest.c.


      /*
      **     P <uuid> ...
      **
      ** Specify one or more other artifacts which are the parents of
      ** this artifact.  The first parent is the primary parent.  All
      ** others are parents by merge. Note that the initial empty
      ** checkin historically has an empty P-card, so empty P-cards
      ** must be accepted.
      */
      case 'P': {
        while( (zUuid = next_token(&x, &sz))!=0 ){
          if( sz!=UUID_SIZE ) SYNTAX("wrong size UUID on P-card");
          if( !validate16(zUuid, UUID_SIZE) )SYNTAX("invalid UUID on P-card");
          if( p->nParent>=p->nParentAlloc ){
            p->nParentAlloc = p->nParentAlloc*2 + 5;
            p->azParent = fossil_realloc(p->azParent,
                               p->nParentAlloc*sizeof(char*));
          }
          i = p->nParent++;
          p->azParent[i] = zUuid;
        }
        break;
      }

      /*
      **     Q (+|-)<uuid> ?<uuid>?
      **
      ** Specify one or a range of checkins that are cherrypicked into
      ** this checkin ("+") or backed out of this checkin ("-").
      */
      case 'Q': {
        if( (zUuid=next_token(&x, &sz))==0 ) SYNTAX("missing UUID on Q-card");
        if( sz!=UUID_SIZE+1 ) SYNTAX("wrong size UUID on Q-card");
        if( zUuid[0]!='+' && zUuid[0]!='-' ){
          SYNTAX("Q-card does not begin with '+' or '-'");
        }








      /*
      **     P <uuid> ...
      **
      ** Specify one or more other artifacts which are the parents of
      ** this artifact.  The first parent is the primary parent.  All
      ** others are parents by merge. Note that the initial empty
      ** check-in historically has an empty P-card, so empty P-cards
      ** must be accepted.
      */
      case 'P': {
        while( (zUuid = next_token(&x, &sz))!=0 ){
          if( sz!=UUID_SIZE ) SYNTAX("wrong size UUID on P-card");
          if( !validate16(zUuid, UUID_SIZE) )SYNTAX("invalid UUID on P-card");
          if( p->nParent>=p->nParentAlloc ){
            p->nParentAlloc = p->nParentAlloc*2 + 5;
            p->azParent = fossil_realloc(p->azParent,
                               p->nParentAlloc*sizeof(char*));
          }
          i = p->nParent++;
          p->azParent[i] = zUuid;
        }
        break;
      }

      /*
      **     Q (+|-)<uuid> ?<uuid>?
      **
      ** Specify one or a range of check-ins that are cherrypicked into
      ** this check-in ("+") or backed out of this check-in ("-").
      */
      case 'Q': {
        if( (zUuid=next_token(&x, &sz))==0 ) SYNTAX("missing UUID on Q-card");
        if( sz!=UUID_SIZE+1 ) SYNTAX("wrong size UUID on Q-card");
        if( zUuid[0]!='+' && zUuid[0]!='-' ){
          SYNTAX("Q-card does not begin with '+' or '-'");
        }
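
Putting the two card formats side by side, a manifest for a merge check-in
that also cherrypicks one change would carry lines shaped like the
following (placeholders, not real hashes):

P <uuid-of-primary-parent> <uuid-of-merge-parent>
Q +<uuid-of-cherrypicked-check-in>

A leading "-" on the Q-card marks a backout instead of a cherrypick.
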
    manifest_destroy(p);
    p = 0;
  }
  return p;
}

/*
** Given a checkin name, load and parse the manifest for that checkin.
** Throw a fatal error if anything goes wrong.
*/
Manifest *manifest_get_by_name(const char *zName, int *pRid){
  int rid;
  Manifest *p;

  rid = name_to_typed_rid(zName, "ci");
  if( !is_a_version(rid) ){
    fossil_fatal("no such checkin: %s", zName);
  }
  if( pRid ) *pRid = rid;
  p = manifest_get(rid, CFTYPE_MANIFEST, 0);
  if( p==0 ){
    fossil_fatal("cannot parse manifest for checkin: %s", zName);
  }
  return p;
}

/*
** COMMAND: test-parse-manifest
**







    manifest_destroy(p);
    p = 0;
  }
  return p;
}

/*
** Given a check-in name, load and parse the manifest for that check-in.
** Throw a fatal error if anything goes wrong.
*/
Manifest *manifest_get_by_name(const char *zName, int *pRid){
  int rid;
  Manifest *p;

  rid = name_to_typed_rid(zName, "ci");
  if( !is_a_version(rid) ){
    fossil_fatal("no such check-in: %s", zName);
  }
  if( pRid ) *pRid = rid;
  p = manifest_get(rid, CFTYPE_MANIFEST, 0);
  if( p==0 ){
    fossil_fatal("cannot parse manifest for check-in: %s", zName);
  }
  return p;
}
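
A minimal usage sketch for the routine above (illustrative only; a bad name
simply triggers the fatal-error behavior described in the comment):

/* Resolve a check-in name such as "trunk" and report its parent count. */
Manifest *p = manifest_get_by_name("trunk", 0);
fossil_print("%d parent check-in(s)\n", p->nParent);
manifest_destroy(p);
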

/*
** COMMAND: test-parse-manifest
**
  db_bind_int(&eq, ":mid", mid);
  db_bind_int(&eq, ":pmid", pmid);
  rc = db_step(&eq);
  db_reset(&eq);
  if( rc==SQLITE_ROW ) return;

  /* Compute the value of the missing pParent or pChild parameter.
  ** Fetch the baseline checkins for both.
  */
  assert( pParent==0 || pChild==0 );
  if( pParent==0 ){
    ppOther = &pParent;
    otherRid = pmid;
  }else{
    ppOther = &pChild;







  db_bind_int(&eq, ":mid", mid);
  db_bind_int(&eq, ":pmid", pmid);
  rc = db_step(&eq);
  db_reset(&eq);
  if( rc==SQLITE_ROW ) return;

  /* Compute the value of the missing pParent or pChild parameter.
  ** Fetch the baseline check-ins for both.
  */
  assert( pParent==0 || pChild==0 );
  if( pParent==0 ){
    ppOther = &pParent;
    otherRid = pmid;
  }else{
    ppOther = &pChild;
     "  cid INTEGER,"                /* A child or mid */
     "  m2 REAL"                     /* Timestamp on the child */
     ");"
  );
}

#if INTERFACE
/* Timestamps might be adjusted slightly to ensure that checkins appear
** on the timeline in chronological order.  This is the maximum amount
** of the adjustment window, in days.
*/
#define AGE_FUDGE_WINDOW      (2.0/86400.0)       /* 2 seconds */

/* This is increment (in days) by which timestamps are adjusted for
** use on the timeline.







     "  cid INTEGER,"                /* A child or mid */
     "  m2 REAL"                     /* Timestamp on the child */
     ");"
  );
}

#if INTERFACE
/* Timestamps might be adjusted slightly to ensure that check-ins appear
** on the timeline in chronological order.  This is the maximum amount
** of the adjustment window, in days.
*/
#define AGE_FUDGE_WINDOW      (2.0/86400.0)       /* 2 seconds */

/* This is increment (in days) by which timestamps are adjusted for
** use on the timeline.

Changes to src/name.c.

@ CREATE TEMP TABLE IF NOT EXISTS description(
@   rid INTEGER PRIMARY KEY,       -- RID of the object
@   uuid TEXT,                     -- SHA1 hash of the object
@   ctime DATETIME,                -- Time of creation
@   isPrivate BOOLEAN DEFAULT 0,   -- True for unpublished artifacts
@   type TEXT,                     -- file, checkin, wiki, ticket, etc.
@   summary TEXT,                  -- Summary comment for the object
@   detail TEXT                    -- filename, checkin comment, etc
@ );
;

/*
** Create the description table if it does not already exists.
** Populate fields of this table with descriptions for all artifacts
** whose RID matches the SQL expression in zWhere.
*/
void describe_artifacts(const char *zWhere){
  db_multi_exec("%s", zDescTab/*safe-for-%s*/);

  /* Describe checkins */
  db_multi_exec(
    "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n"
    "SELECT blob.rid, blob.uuid, event.mtime, 'checkin',\n"
    "       'checkin on ' || strftime('%%Y-%%m-%%d %%H:%%M',event.mtime)\n"
    "  FROM event, blob\n"
    " WHERE (event.objid %s) AND event.type='ci'\n"
    "   AND event.objid=blob.rid;",
    zWhere /*safe-for-%s*/
  );

  /* Describe files */







@ CREATE TEMP TABLE IF NOT EXISTS description(
@   rid INTEGER PRIMARY KEY,       -- RID of the object
@   uuid TEXT,                     -- SHA1 hash of the object
@   ctime DATETIME,                -- Time of creation
@   isPrivate BOOLEAN DEFAULT 0,   -- True for unpublished artifacts
@   type TEXT,                     -- file, checkin, wiki, ticket, etc.
@   summary TEXT,                  -- Summary comment for the object
@   detail TEXT                    -- File name, check-in comment, etc
@ );
;

/*
** Create the description table if it does not already exists.
** Populate fields of this table with descriptions for all artifacts
** whose RID matches the SQL expression in zWhere.
*/
void describe_artifacts(const char *zWhere){
  db_multi_exec("%s", zDescTab/*safe-for-%s*/);

  /* Describe check-ins */
  db_multi_exec(
    "INSERT OR IGNORE INTO description(rid,uuid,ctime,type,summary)\n"
    "SELECT blob.rid, blob.uuid, event.mtime, 'checkin',\n"
    "       'check-in on ' || strftime('%%Y-%%m-%%d %%H:%%M',event.mtime)\n"
    "  FROM event, blob\n"
    " WHERE (event.objid %s) AND event.type='ci'\n"
    "   AND event.objid=blob.rid;",
    zWhere /*safe-for-%s*/
  );

  /* Describe files */

Changes to src/path.c.

**
** Author contact information:
**   drh@sqlite.org
**
*******************************************************************************
**
** This file contains code used to trace paths of through the
** directed acyclic graph (DAG) of checkins.
*/
#include "config.h"
#include "path.h"
#include <assert.h>

#if INTERFACE
/* Nodes for the paths through the DAG.







**
** Author contact information:
**   drh@sqlite.org
**
*******************************************************************************
**
** This file contains code used to trace paths of through the
** directed acyclic graph (DAG) of check-ins.
*/
#include "config.h"
#include "path.h"
#include <assert.h>

#if INTERFACE
/* Nodes for the paths through the DAG.
}

/*
** COMMAND:  test-shortest-path
**
** Usage: %fossil test-shortest-path ?--no-merge? VERSION1 VERSION2
**
** Report the shortest path between two checkins.  If the --no-merge flag
** is used, follow only direct parent-child paths and omit merge links.
*/
void shortest_path_test_cmd(void){
  int iFrom;
  int iTo;
  PathNode *p;
  int n;







}

/*
** COMMAND:  test-shortest-path
**
** Usage: %fossil test-shortest-path ?--no-merge? VERSION1 VERSION2
**
** Report the shortest path between two check-ins.  If the --no-merge flag
** is used, follow only direct parent-child paths and omit merge links.
*/
void shortest_path_test_cmd(void){
  int iFrom;
  int iTo;
  PathNode *p;
  int n;
  int origName;        /* Original name of file */
  int curName;         /* Current name of the file */
  int newName;         /* Name of file in next version */
  NameChange *pNext;   /* List of all name changes */
};

/*
** Compute all file name changes that occur going from checkin iFrom
** to checkin iTo.
**
** The number of name changes is written into *pnChng.  For each name
** change, two integers are allocated for *piChng.  The first is the
** filename.fnid for the original name as seen in check-in iFrom and
** the second is for new name as it is used in check-in iTo.
**
** Space to hold *piChng is obtained from fossil_malloc() and should







  int origName;        /* Original name of file */
  int curName;         /* Current name of the file */
  int newName;         /* Name of file in next version */
  NameChange *pNext;   /* List of all name changes */
};

/*
** Compute all file name changes that occur going from check-in iFrom
** to check-in iTo.
**
** The number of name changes is written into *pnChng.  For each name
** change, two integers are allocated for *piChng.  The first is the
** filename.fnid for the original name as seen in check-in iFrom and
** the second is for new name as it is used in check-in iTo.
**
** Space to hold *piChng is obtained from fossil_malloc() and should

Changes to src/printf.c.

}

/*
** Return an appropriate set of flags for wiki_convert() for displaying
** comments on a timeline.  These flag settings are determined by
** configuration parameters.
**
** The altForm2 argument is true for "%!w" (with the "!" alternate-form-2
** flags) and is false for plain "%w".  The ! indicates that the text is
** to be rendered on a form rather than the timeline and that block markup
** is acceptable even if the "timeline-block-markup" setting is false.
*/
static int wiki_convert_flags(int altForm2){
  static int wikiFlags = 0;
  if( wikiFlags==0 ){
    if( altForm2 || db_get_boolean("timeline-block-markup", 0) ){







}

/*
** Return an appropriate set of flags for wiki_convert() for displaying
** comments on a timeline.  These flag settings are determined by
** configuration parameters.
**
** The altForm2 argument is true for "%!W" (with the "!" alternate-form-2
** flags) and is false for plain "%W".  The ! indicates that the text is
** to be rendered on a form rather than the timeline and that block markup
** is acceptable even if the "timeline-block-markup" setting is false.
*/
static int wiki_convert_flags(int altForm2){
  static int wikiFlags = 0;
  if( wikiFlags==0 ){
    if( altForm2 || db_get_boolean("timeline-block-markup", 0) ){
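
A condensed illustration of the two forms as they appear in Fossil's "@"
output lines (zComment is a hypothetical variable; the %!W form is the one
substituted throughout this check-in):

/* Timeline style: block markup only when timeline-block-markup is on */
@ %W(zComment)

/* Form style ("!" alternate form 2): block markup is always allowed */
@ %!W(zComment)
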

Changes to src/publish.c.

** COMMAND: unpublished
**
** Usage: %fossil unpublished ?OPTIONS?
**
** Show a list of unpublished or "private" artifacts.  Unpublished artifacts
** will never push and hence will not be shared with collaborators.
**
** By default, this command only shows unpublished checkins.  To show
** all unpublished artifacts, use the --all command-line option.
**
** OPTIONS:
**     --all                   Show all artifacts, not just checkins
*/
void unpublished_cmd(void){
  int bAll = find_option("all",0,0)!=0;

  db_find_and_open_repository(0,0);
  verify_all_options();
  if( bAll ){







** COMMAND: unpublished
**
** Usage: %fossil unpublished ?OPTIONS?
**
** Show a list of unpublished or "private" artifacts.  Unpublished artifacts
** will never push and hence will not be shared with collaborators.
**
** By default, this command only shows unpublished check-ins.  To show
** all unpublished artifacts, use the --all command-line option.
**
** OPTIONS:
**     --all                   Show all artifacts, not just check-ins
*/
void unpublished_cmd(void){
  int bAll = find_option("all",0,0)!=0;

  db_find_and_open_repository(0,0);
  verify_all_options();
  if( bAll ){
**
** Usage: %fossil publish ?--only? TAGS...
**
** Cause artifacts identified by TAGS... to be published (made non-private).
** This can be used (for example) to convert a private branch into a public
** branch, or to publish a bundle that was imported privately.
**
** If any of TAGS names a branch, then all checkins on the most recent
** instance of that branch are included, not just the most recent checkin.
**
** If any of TAGS name checkins then all files and tags associated with
** those checkins are also published automatically.  Except if the --only
** option is used, then only the specific artifacts identified by TAGS
** are published.
**
** If a TAG is already public, this command is a harmless no-op.
*/
void publish_cmd(void){
  int bOnly = find_option("only",0,0)!=0;







**
** Usage: %fossil publish ?--only? TAGS...
**
** Cause artifacts identified by TAGS... to be published (made non-private).
** This can be used (for example) to convert a private branch into a public
** branch, or to publish a bundle that was imported privately.
**
** If any of TAGS names a branch, then all check-ins on the most recent
** instance of that branch are included, not just the most recent check-in.
**
** If any of TAGS name check-ins then all files and tags associated with
** those check-ins are also published automatically.  Except if the --only
** option is used, then only the specific artifacts identified by TAGS
** are published.
**
** If a TAG is already public, this command is a harmless no-op.
*/
void publish_cmd(void){
  int bOnly = find_option("only",0,0)!=0;
    find_checkin_associates("ok", bExclusive);
  }
  if( bTest ){
    /* If the --test option is used, then do not actually publish any
    ** artifacts.  Instead, just list the artifact information on standard
    ** output.  The --test option is useful for verifying correct operation
    ** of the logic that figures out which artifacts to publish, such as
    ** the find_checkin_associates() routine 
    */
    describe_artifacts_to_stdout("IN ok", 0);
  }else{
    /* Standard behavior is simply to remove the published documents from
    ** the PRIVATE table */
    db_multi_exec(
      "DELETE FROM ok WHERE rid NOT IN private;"







    find_checkin_associates("ok", bExclusive);
  }
  if( bTest ){
    /* If the --test option is used, then do not actually publish any
    ** artifacts.  Instead, just list the artifact information on standard
    ** output.  The --test option is useful for verifying correct operation
    ** of the logic that figures out which artifacts to publish, such as
    ** the find_checkin_associates() routine
    */
    describe_artifacts_to_stdout("IN ok", 0);
  }else{
    /* Standard behavior is simply to remove the published documents from
    ** the PRIVATE table */
    db_multi_exec(
      "DELETE FROM ok WHERE rid NOT IN private;"

Changes to src/purge.c.

** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code used to implement the "purge" command and
** related functionality for removing checkins from a repository.  It also
** manages the graveyard of purged content.
*/
#include "config.h"
#include "purge.h"
#include <assert.h>

/*
** SQL code used to initialize the schema of a bundle.
**
** The purgeevent table contains one entry for each purge event.  For each
** purge event, multiple artifacts might have been removed.  Each removed
** artifact is stored as an entry in the purgeitem table.
**
** The purgeevent and purgeitem tables are not synced, even by the 
** "fossil config" command.  They exist only as a backup in case of a 
** mistaken purge or for content recovery in case there is a bug in the
** purge command.
*/
static const char zPurgeInit[] = 
@ CREATE TABLE IF NOT EXISTS "%w".purgeevent(
@   peid INTEGER PRIMARY KEY,  -- Unique ID for the purge event
@   ctime DATETIME,            -- When purge occurred.  Seconds since 1970.
@   pnotes TEXT                -- Human-readable notes about the purge event
@ );
@ CREATE TABLE IF NOT EXISTS "%w".purgeitem(
@   piid INTEGER PRIMARY KEY,  -- ID for the purge item
@   peid INTEGER REFERENCES purgeevent ON DELETE CASCADE, -- Purge event
@   orid INTEGER,              -- Original RID before purged 
@   uuid TEXT NOT NULL,        -- SHA1 hash of the purged artifact
@   srcid INTEGER,             -- Basis purgeitem for delta compression
@   isPrivate BOOLEAN,         -- True if artifact was originally private
@   sz INT NOT NULL,           -- Uncompressed size of the purged artifact
@   desc TEXT,                 -- Brief description of this artifact
@   data BLOB                  -- Compressed artifact content
@ );







** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code used to implement the "purge" command and
** related functionality for removing check-ins from a repository.  It also
** manages the graveyard of purged content.
*/
#include "config.h"
#include "purge.h"
#include <assert.h>

/*
** SQL code used to initialize the schema of a bundle.
**
** The purgeevent table contains one entry for each purge event.  For each
** purge event, multiple artifacts might have been removed.  Each removed
** artifact is stored as an entry in the purgeitem table.
**
** The purgeevent and purgeitem tables are not synced, even by the
** "fossil config" command.  They exist only as a backup in case of a
** mistaken purge or for content recovery in case there is a bug in the
** purge command.
*/
static const char zPurgeInit[] =
@ CREATE TABLE IF NOT EXISTS "%w".purgeevent(
@   peid INTEGER PRIMARY KEY,  -- Unique ID for the purge event
@   ctime DATETIME,            -- When purge occurred.  Seconds since 1970.
@   pnotes TEXT                -- Human-readable notes about the purge event
@ );
@ CREATE TABLE IF NOT EXISTS "%w".purgeitem(
@   piid INTEGER PRIMARY KEY,  -- ID for the purge item
@   peid INTEGER REFERENCES purgeevent ON DELETE CASCADE, -- Purge event
@   orid INTEGER,              -- Original RID before purged
@   uuid TEXT NOT NULL,        -- SHA1 hash of the purged artifact
@   srcid INTEGER,             -- Basis purgeitem for delta compression
@   isPrivate BOOLEAN,         -- True if artifact was originally private
@   sz INT NOT NULL,           -- Uncompressed size of the purged artifact
@   desc TEXT,                 -- Brief description of this artifact
@   data BLOB                  -- Compressed artifact content
@ );
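
Because the graveyard schema is spelled out above, inspecting it only takes
a join of the two tables.  The sketch below follows the db_prepare() style
used elsewhere in this file; list_graveyard_sketch is a made-up name, and
it assumes an earlier purge has already created the tables.

/* Sketch only: list each purge event and the artifacts it removed. */
static void list_graveyard_sketch(void){
  Stmt q;
  db_prepare(&q,
    "SELECT e.peid, datetime(e.ctime,'unixepoch'), i.uuid, i.sz"
    "  FROM purgeevent e JOIN purgeitem i ON i.peid=e.peid"
    " ORDER BY e.peid, i.piid");
  while( db_step(&q)==SQLITE_ROW ){
    fossil_print("%d %s %s (%d bytes)\n",
      db_column_int(&q,0), db_column_text(&q,1),
      db_column_text(&q,2), db_column_int(&q,3));
  }
  db_finalize(&q);
}
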
    verify_before_commit(rid);
  }
  db_finalize(&q);

  /* Construct the graveyard and copy the artifacts to be purged into the
  ** graveyard */
  if( moveToGraveyard ){
    db_multi_exec(zPurgeInit /*works-like:"%w%w"*/, 
                  db_name("repository"), db_name("repository"));
    db_multi_exec(
      "INSERT INTO purgeevent(ctime,pnotes) VALUES(now(),%Q)", zNote
    );
    peid = db_last_insert_rowid();
    db_prepare(&q, "SELECT rid FROM delta WHERE rid IN \"%w\""
                   " AND srcid NOT IN \"%w\"", zTab, zTab);







    verify_before_commit(rid);
  }
  db_finalize(&q);

  /* Construct the graveyard and copy the artifacts to be purged into the
  ** graveyard */
  if( moveToGraveyard ){
    db_multi_exec(zPurgeInit /*works-like:"%w%w"*/,
                  db_name("repository"), db_name("repository"));
    db_multi_exec(
      "INSERT INTO purgeevent(ctime,pnotes) VALUES(now(),%Q)", zNote
    );
    peid = db_last_insert_rowid();
    db_prepare(&q, "SELECT rid FROM delta WHERE rid IN \"%w\""
                   " AND srcid NOT IN \"%w\"", zTab, zTab);

  /* Mission accomplished */
  db_end_transaction(0);
  return peid;
}

/*
** The TEMP table named zTab contains RIDs for a set of checkins.  
**
** Check to see if any checkin in zTab is a baseline manifest for some
** delta manifest that is not in zTab.  Return true if zTab contains a
** baseline for a delta that is not in zTab.
**
** This is a database integrity preservation check.  The checkins in zTab
** are about to be deleted or otherwise made inaccessible.  This routine
** is checking to ensure that purging the checkins in zTab will not delete
** a baseline manifest out from under a delta.
*/
int purge_baseline_out_from_under_delta(const char *zTab){
  if( !db_table_has_column("repository","plink","baseid") ){
    /* Skip this check if the current database is an older schema that
    ** does not contain the PLINK.BASEID field. */
    return 0;
  }else{
    return db_int(0,
      "SELECT 1 FROM plink WHERE baseid IN \"%w\" AND cid NOT IN \"%w\"",
      zTab, zTab);
  }
}


/*
** The TEMP table named zTab contains the RIDs for a set of checkin
** artifacts.  Expand this set (by adding new entries to zTab) to include
** all other artifacts that are used by the set of checkins in
** the original list.
**
** If the bExclusive flag is true, then the set is only expanded by
** artifacts that are used exclusively by the checkins in the set.
** When bExclusive is false, then all artifacts used by the checkins
** are added even if those artifacts are also used by other checkins
** not in the set.
**
** The "fossil publish" command with the (undocumented) --test and
** --exclusive options can be used for interactive testing of this
** function.
*/
void find_checkin_associates(const char *zTab, int bExclusive){

  /* Mission accomplished */
  db_end_transaction(0);
  return peid;
}

/*
** The TEMP table named zTab contains RIDs for a set of check-ins.
**
** Check to see if any check-in in zTab is a baseline manifest for some
** delta manifest that is not in zTab.  Return true if zTab contains a
** baseline for a delta that is not in zTab.
**
** This is a database integrity preservation check.  The check-ins in zTab
** are about to be deleted or otherwise made inaccessible.  This routine
** is checking to ensure that purging the check-ins in zTab will not delete
** a baseline manifest out from under a delta.
*/
int purge_baseline_out_from_under_delta(const char *zTab){
  if( !db_table_has_column("repository","plink","baseid") ){
    /* Skip this check if the current database is an older schema that
    ** does not contain the PLINK.BASEID field. */
    return 0;
  }else{
    return db_int(0,
      "SELECT 1 FROM plink WHERE baseid IN \"%w\" AND cid NOT IN \"%w\"",
      zTab, zTab);
  }
}
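(Aside: this guard is intended to run after the set of victim check-ins has been collected and before anything is copied to the graveyard.  A minimal sketch of such a call site follows; the temp table name "ok" matches the purge command further down in this diff, but the placement and the message text are illustrative only, not part of this check-in.)

  if( purge_baseline_out_from_under_delta("ok") ){
    /* Illustrative only: refuse to purge when a delta manifest would be
    ** left without the baseline manifest it is encoded against. */
    fossil_fatal("refusing to purge: a delta manifest depends on a"
                 " baseline check-in that would be removed");
  }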


/*
** The TEMP table named zTab contains the RIDs for a set of check-in
** artifacts.  Expand this set (by adding new entries to zTab) to include
** all other artifacts that are used by the set of check-ins in
** the original list.
**
** If the bExclusive flag is true, then the set is only expanded by
** artifacts that are used exclusively by the check-ins in the set.
** When bExclusive is false, then all artifacts used by the check-ins
** are added even if those artifacts are also used by other check-ins
** not in the set.
**
** The "fossil publish" command with the (undocumented) --test and
** --exclusive options can be used for interactive testing of this
** function.
*/
void find_checkin_associates(const char *zTab, int bExclusive){
      "DELETE FROM \"%w_tags\""
      " WHERE tid IN (SELECT srcid FROM tagxref"
                     " WHERE srcid IN \"%w_tags\""
                     "   AND rid NOT IN \"%w\")",
      zTab, zTab, zTab
    );
  }
  
  /* Transfer the extra artifacts into zTab */
  db_multi_exec(
    "INSERT OR IGNORE INTO \"%w\" SELECT fid FROM \"%w_files\";"
    "INSERT OR IGNORE INTO \"%w\" SELECT tid FROM \"%w_tags\";"
    "DROP TABLE \"%w_files\";"
    "DROP TABLE \"%w_tags\";",
    zTab, zTab, zTab, zTab, zTab, zTab
      "DELETE FROM \"%w_tags\""
      " WHERE tid IN (SELECT srcid FROM tagxref"
                     " WHERE srcid IN \"%w_tags\""
                     "   AND rid NOT IN \"%w\")",
      zTab, zTab, zTab
    );
  }

  /* Transfer the extra artifacts into zTab */
  db_multi_exec(
    "INSERT OR IGNORE INTO \"%w\" SELECT fid FROM \"%w_files\";"
    "INSERT OR IGNORE INTO \"%w\" SELECT tid FROM \"%w_tags\";"
    "DROP TABLE \"%w_files\";"
    "DROP TABLE \"%w_tags\";",
    zTab, zTab, zTab, zTab, zTab, zTab
  assert( pBasis!=0 || iSrc==0 );
  if( iSrc>0 ){
    if( bag_find(&busy, iSrc) ){
      fossil_fatal("delta loop while uncompressing purged artifacts");
    }
    bag_insert(&busy, iSrc);
  }
  db_prepare(&q, 
     "SELECT uuid, data, isPrivate, ix.piid"
     "  FROM ix, purgeitem"
     " WHERE ix.srcid=%d"
     "   AND ix.piid=purgeitem.piid;",
     iSrc
  );
  while( db_step(&q)==SQLITE_ROW ){
  assert( pBasis!=0 || iSrc==0 );
  if( iSrc>0 ){
    if( bag_find(&busy, iSrc) ){
      fossil_fatal("delta loop while uncompressing purged artifacts");
    }
    bag_insert(&busy, iSrc);
  }
  db_prepare(&q,
     "SELECT uuid, data, isPrivate, ix.piid"
     "  FROM ix, purgeitem"
     " WHERE ix.srcid=%d"
     "   AND ix.piid=purgeitem.piid;",
     iSrc
  );
  while( db_step(&q)==SQLITE_ROW ){
**   fossil purge cat UUID...
**
**      Write the content of one or more artifacts in the graveyard onto
**      standard output.
**
**   fossil purge ?checkins? TAGS... ?OPTIONS?
**
**      Move the checkins identified by TAGS and all of their descendants
**      out of the repository and into the graveyard.  The "checkins" 
**      subcommand keyword is optional and can be omitted as long as TAGS
**      does not conflict with any other subcommand.
**
**      If a TAGS includes a branch name then it means all the checkins
**      on the most recent occurrence of that branch.
**
**           --explain         Make no changes, but show what would happen.
**           --dry-run         Make no changes.
**
**   fossil purge list|ls ?-l?
**
**   fossil purge cat UUID...
**
**      Write the content of one or more artifacts in the graveyard onto
**      standard output.
**
**   fossil purge ?checkins? TAGS... ?OPTIONS?
**
**      Move the check-ins identified by TAGS and all of their descendants
**      out of the repository and into the graveyard.  The "checkins"
**      subcommand keyword is optional and can be omitted as long as TAGS
**      does not conflict with any other subcommand.
**
**      If a TAGS includes a branch name then it means all the check-ins
**      on the most recent occurrence of that branch.
**
**           --explain         Make no changes, but show what would happen.
**           --dry-run         Make no changes.
**
**   fossil purge list|ls ?-l?
**
    nCkin = db_int(0, "SELECT count(*) FROM ok");
    find_checkin_associates("ok", 1);
    nArtifact = db_int(0, "SELECT count(*) FROM ok");
    if( explainOnly ){
      describe_artifacts_to_stdout("IN ok", 0);
    }else{
      int peid = purge_artifact_list("ok","",1);
      fossil_print("%d checkins and %d artifacts purged.\n", nCkin, nArtifact);
      fossil_print("undoable using \"%s purge undo %d\".\n",
                    g.nameOfExe, peid);
    }
    db_end_transaction(explainOnly||dryRun);
  }
}
    nCkin = db_int(0, "SELECT count(*) FROM ok");
    find_checkin_associates("ok", 1);
    nArtifact = db_int(0, "SELECT count(*) FROM ok");
    if( explainOnly ){
      describe_artifacts_to_stdout("IN ok", 0);
    }else{
      int peid = purge_artifact_list("ok","",1);
      fossil_print("%d check-ins and %d artifacts purged.\n", nCkin, nArtifact);
      fossil_print("undoable using \"%s purge undo %d\".\n",
                    g.nameOfExe, peid);
    }
    db_end_transaction(explainOnly||dryRun);
  }
}

Changes to src/rebuild.c.

  return errCnt;
}

/*
** Attempt to convert more full-text blobs into delta-blobs for
** storage efficiency.
*/
static void extra_deltification(void){
  Stmt q;
  int topid, previd, rid;
  int prevfnid, fnid;
  db_begin_transaction();
  db_prepare(&q,
     "SELECT rid FROM event, blob"
     " WHERE blob.rid=event.objid"
  return errCnt;
}

/*
** Attempt to convert more full-text blobs into delta-blobs for
** storage efficiency.
*/
void extra_deltification(void){
  Stmt q;
  int topid, previd, rid;
  int prevfnid, fnid;
  db_begin_transaction();
  db_prepare(&q,
     "SELECT rid FROM event, blob"
     " WHERE blob.rid=event.objid"

Changes to src/rss.c.


/*
** WEBPAGE: timeline.rss
** URL:  /timeline.rss?y=TYPE&n=LIMIT&tkt=UUID&tag=TAG&wiki=NAME&name=FILENAME
**
** Produce an RSS feed of the timeline.
**
** TYPE may be: all, ci (show checkins only), t (show tickets only),
** w (show wiki only).
**
** LIMIT is the number of items to show.
**
** tkt=UUID filters for only those events for the specified ticket. tag=TAG
** filters for a tag, and wiki=NAME for a wiki page. Only one may be used.
**

/*
** WEBPAGE: timeline.rss
** URL:  /timeline.rss?y=TYPE&n=LIMIT&tkt=UUID&tag=TAG&wiki=NAME&name=FILENAME
**
** Produce an RSS feed of the timeline.
**
** TYPE may be: all, ci (show check-ins only), t (show tickets only),
** w (show wiki only).
**
** LIMIT is the number of items to show.
**
** tkt=UUID filters for only those events for the specified ticket. tag=TAG
** filters for a tag, and wiki=NAME for a wiki page. Only one may be used.
**
**
** Usage: %fossil rss ?OPTIONS?
**
** The CLI variant of the /timeline.rss page, this produces an RSS
** feed of the timeline to stdout. Options:
**
** -type|y FLAG
**    may be: all (default), ci (show checkins only), t (show tickets only),
**    w (show wiki only).
**
** -limit|n LIMIT
**   The maximum number of items to show.
**
** -tkt UUID
**    Filters for only those events for the specified ticket.
**
** Usage: %fossil rss ?OPTIONS?
**
** The CLI variant of the /timeline.rss page, this produces an RSS
** feed of the timeline to stdout. Options:
**
** -type|y FLAG
**    may be: all (default), ci (show check-ins only), t (show tickets only),
**    w (show wiki only).
**
** -limit|n LIMIT
**   The maximum number of items to show.
**
** -tkt UUID
**    Filters for only those events for the specified ticket.

Changes to src/schema.c.

@ -- Filenames
@ --
@ CREATE TABLE filename(
@   fnid INTEGER PRIMARY KEY,    -- Filename ID
@   name TEXT UNIQUE             -- Name of file page
@ );
@
@ -- Linkages between checkins, files created by each checkin, and
@ -- the names of those files.
@ --
@ -- Each entry represents a file that changed content from pid to fid
@ -- due to the check-in that goes from pmid to mid.  fnid is the name
@ -- of the file in the mid check-in.  If the file was renamed as part
@ -- of the mid check-in, then pfnid is the previous filename.
@
@ -- There can be multiple entries for (mid,fid) if the mid checkin was
@ -- a merge.  Entries with isaux==0 are from the primary parent.  Merge
@ -- parents have isaux set to true.
@ --
@ -- Field name mnemonics:
@ --    mid = Manifest ID.  (Each check-in is stored as a "Manifest")
@ --    fid = File ID.
@ --    pmid = Parent Manifest ID.
@ --    pid = Parent file ID.
@ --    fnid = File Name ID.
@ --    pfnid = Parent File Name ID.
@ --    isaux = pmid IS AUXiliary parent, not primary parent
@ --
@ -- pid==0 if the file is added by checkin mid.
@ -- fid==0 if the file is removed by checkin mid.
@ --
@ CREATE TABLE mlink(
@   mid INTEGER REFERENCES plink(cid),  -- Checkin that contains fid
@   fid INTEGER REFERENCES blob,        -- New file content. 0 if deleted
@   pmid INTEGER REFERENCES plink(cid), -- Checkin that contains pid
@   pid INTEGER REFERENCES blob,        -- Prev file content. 0 if new
@   fnid INTEGER REFERENCES filename,   -- Name of the file
@   pfnid INTEGER REFERENCES filename,  -- Previous name. 0 if unchanged
@   mperm INTEGER,                      -- File permissions.  1==exec
@   isaux BOOLEAN DEFAULT 0             -- TRUE if pmid is the primary
@ );
@ CREATE INDEX mlink_i1 ON mlink(mid);
@ CREATE INDEX mlink_i2 ON mlink(fnid);
@ CREATE INDEX mlink_i3 ON mlink(fid);
@ CREATE INDEX mlink_i4 ON mlink(pid);
@
@ -- Parent/child linkages between checkins
@ --
@ CREATE TABLE plink(
@   pid INTEGER REFERENCES blob,    -- Parent manifest
@   cid INTEGER REFERENCES blob,    -- Child manifest
@   isprim BOOLEAN,                 -- pid is the primary parent of cid
@   mtime DATETIME,                 -- the date/time stamp on cid.  Julian day.
@   baseid INTEGER REFERENCES blob, -- Baseline if cid is a delta manifest.
@   UNIQUE(pid, cid)
@ );
@ CREATE INDEX plink_i2 ON plink(cid,pid);
@
@ -- A "leaf" checkin is a checkin that has no children in the same
@ -- branch.  The set of all leaves is easily computed with a join,
@ -- between the plink and tagxref tables, but it is a slower join for
@ -- very large repositories (repositories with 100,000 or more checkins)
@ -- and so it makes sense to precompute the set of leaves.  There is
@ -- one entry in the following table for each leaf.
@ --
@ CREATE TABLE leaf(rid INTEGER PRIMARY KEY);
@
@ -- Events used to generate a timeline
@ --
@ -- Filenames
@ --
@ CREATE TABLE filename(
@   fnid INTEGER PRIMARY KEY,    -- Filename ID
@   name TEXT UNIQUE             -- Name of file page
@ );
@
@ -- Linkages between check-ins, files created by each check-in, and
@ -- the names of those files.
@ --
@ -- Each entry represents a file that changed content from pid to fid
@ -- due to the check-in that goes from pmid to mid.  fnid is the name
@ -- of the file in the mid check-in.  If the file was renamed as part
@ -- of the mid check-in, then pfnid is the previous filename.
@
@ -- There can be multiple entries for (mid,fid) if the mid check-in was
@ -- a merge.  Entries with isaux==0 are from the primary parent.  Merge
@ -- parents have isaux set to true.
@ --
@ -- Field name mnemonics:
@ --    mid = Manifest ID.  (Each check-in is stored as a "Manifest")
@ --    fid = File ID.
@ --    pmid = Parent Manifest ID.
@ --    pid = Parent file ID.
@ --    fnid = File Name ID.
@ --    pfnid = Parent File Name ID.
@ --    isaux = pmid IS AUXiliary parent, not primary parent
@ --
@ -- pid==0 if the file is added by check-in mid.
@ -- fid==0 if the file is removed by check-in mid.
@ --
@ CREATE TABLE mlink(
@   mid INTEGER REFERENCES plink(cid),  -- Check-in that contains fid
@   fid INTEGER REFERENCES blob,        -- New file content. 0 if deleted
@   pmid INTEGER REFERENCES plink(cid), -- Check-in that contains pid
@   pid INTEGER REFERENCES blob,        -- Prev file content. 0 if new
@   fnid INTEGER REFERENCES filename,   -- Name of the file
@   pfnid INTEGER REFERENCES filename,  -- Previous name. 0 if unchanged
@   mperm INTEGER,                      -- File permissions.  1==exec
@   isaux BOOLEAN DEFAULT 0             -- TRUE if pmid is the primary
@ );
@ CREATE INDEX mlink_i1 ON mlink(mid);
@ CREATE INDEX mlink_i2 ON mlink(fnid);
@ CREATE INDEX mlink_i3 ON mlink(fid);
@ CREATE INDEX mlink_i4 ON mlink(pid);
@
@ -- Parent/child linkages between check-ins
@ --
@ CREATE TABLE plink(
@   pid INTEGER REFERENCES blob,    -- Parent manifest
@   cid INTEGER REFERENCES blob,    -- Child manifest
@   isprim BOOLEAN,                 -- pid is the primary parent of cid
@   mtime DATETIME,                 -- the date/time stamp on cid.  Julian day.
@   baseid INTEGER REFERENCES blob, -- Baseline if cid is a delta manifest.
@   UNIQUE(pid, cid)
@ );
@ CREATE INDEX plink_i2 ON plink(cid,pid);
@
@ -- A "leaf" check-in is a check-in that has no children in the same
@ -- branch.  The set of all leaves is easily computed with a join,
@ -- between the plink and tagxref tables, but it is a slower join for
@ -- very large repositories (repositories with 100,000 or more check-ins)
@ -- and so it makes sense to precompute the set of leaves.  There is
@ -- one entry in the following table for each leaf.
@ --
@ CREATE TABLE leaf(rid INTEGER PRIMARY KEY);
@
@ -- Events used to generate a timeline
@ --
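(Aside: to make the mlink mnemonics above concrete, the sketch below shows one way to list the file names touched by a single check-in using only the mlink and filename tables.  It is an illustration, not part of the schema or of this check-in; db_prepare()/db_step()/db_finalize() are the usual Fossil wrappers and db_column_text() is an assumed accessor.)

/* Illustrative sketch: print the name of every file changed by the
** check-in whose manifest RID is "mid". */
static void print_files_changed_by(int mid){
  Stmt q;
  db_prepare(&q,
    "SELECT DISTINCT filename.name"
    "  FROM mlink, filename"
    " WHERE mlink.mid=%d AND filename.fnid=mlink.fnid"
    " ORDER BY filename.name",
    mid
  );
  while( db_step(&q)==SQLITE_ROW ){
    fossil_print("%s\n", db_column_text(&q, 0));
  }
  db_finalize(&q);
}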
@ -- when a check-in comment refers to a ticket) an entry is made in
@ -- the following table for that hyperlink.  This table is used to
@ -- facilitate the display of "back links".
@ --
@ CREATE TABLE backlink(
@   target TEXT,           -- Where the hyperlink points to
@   srctype INT,           -- 0: check-in  1: ticket  2: wiki
@   srcid INT,             -- rid for checkin or wiki.  tkt_id for ticket.
@   mtime TIMESTAMP,       -- time that the hyperlink was added. Julian day.
@   UNIQUE(target, srctype, srcid)
@ );
@ CREATE INDEX backlink_src ON backlink(srcid, srctype);
@
@ -- Each attachment is an entry in the following table.  Only
@ -- the most recent attachment (identified by the D card) is saved.
@ -- when a check-in comment refers to a ticket) an entry is made in
@ -- the following table for that hyperlink.  This table is used to
@ -- facilitate the display of "back links".
@ --
@ CREATE TABLE backlink(
@   target TEXT,           -- Where the hyperlink points to
@   srctype INT,           -- 0: check-in  1: ticket  2: wiki
@   srcid INT,             -- rid for check-in or wiki.  tkt_id for ticket.
@   mtime TIMESTAMP,       -- time that the hyperlink was added. Julian day.
@   UNIQUE(target, srctype, srcid)
@ );
@ CREATE INDEX backlink_src ON backlink(srcid, srctype);
@
@ -- Each attachment is an entry in the following table.  Only
@ -- the most recent attachment (identified by the D card) is saved.
# define TAG_USER       3     /* User who made a check-in */
# define TAG_DATE       4     /* The date of a check-in */
# define TAG_HIDDEN     5     /* Do not display in timeline */
# define TAG_PRIVATE    6     /* Do not sync */
# define TAG_CLUSTER    7     /* A cluster */
# define TAG_BRANCH     8     /* Value is name of the current branch */
# define TAG_CLOSED     9     /* Do not display this check-in as a leaf */
# define TAG_PARENT     10    /* Change to parentage on a checkin */
#endif
#if EXPORT_INTERFACE
# define MAX_INT_TAG    16    /* The largest pre-assigned tag id */
#endif

/*
** The schema for the local FOSSIL database file found at the root
# define TAG_USER       3     /* User who made a check-in */
# define TAG_DATE       4     /* The date of a check-in */
# define TAG_HIDDEN     5     /* Do not display in timeline */
# define TAG_PRIVATE    6     /* Do not sync */
# define TAG_CLUSTER    7     /* A cluster */
# define TAG_BRANCH     8     /* Value is name of the current branch */
# define TAG_CLOSED     9     /* Do not display this check-in as a leaf */
# define TAG_PARENT     10    /* Change to parentage on a check-in */
#endif
#if EXPORT_INTERFACE
# define MAX_INT_TAG    16    /* The largest pre-assigned tag id */
#endif

/*
** The schema for the local FOSSIL database file found at the root

Changes to src/search.c.

** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code to implement a very simple search function
** against timeline comments, checkin content, wiki pages, and/or tickets.
**
** The search is full-text like in that it is looking for words and ignores
** punctuation and capitalization.  But it is more akin to "grep" in that
** it scans the entire corpus for the search, and it does not support the
** full functionality of FTS4.
*/
#include "config.h"
** Author contact information:
**   drh@hwaci.com
**   http://www.hwaci.com/drh/
**
*******************************************************************************
**
** This file contains code to implement a very simple search function
** against timeline comments, check-in content, wiki pages, and/or tickets.
**
** The search is full-text like in that it is looking for words and ignores
** punctuation and capitalization.  But it is more akin to "grep" in that
** it scans the entire corpus for the search, and it does not support the
** full functionality of FTS4.
*/
#include "config.h"

  db_multi_exec(
     "CREATE TEMP TABLE srch(rid,uuid,date,comment,x);"
     "CREATE INDEX srch_idx1 ON srch(x);"
     "INSERT INTO srch(rid,uuid,date,comment,x)"
     "   SELECT blob.rid, uuid, datetime(event.mtime%s),"
     "          coalesce(ecomment,comment),"
     "          score(coalesce(ecomment,comment)) AS y"
     "     FROM event, blob"
     "    WHERE blob.rid=event.objid AND y>0;",

     timeline_utc()
  );
  iBest = db_int(0, "SELECT max(x) FROM srch");
  blob_append(&sql,
              "SELECT rid, uuid, date, comment, 0, 0 FROM srch "
              "WHERE 1 ", -1);
  if(!fAll){

  db_multi_exec(
     "CREATE TEMP TABLE srch(rid,uuid,date,comment,x);"
     "CREATE INDEX srch_idx1 ON srch(x);"
     "INSERT INTO srch(rid,uuid,date,comment,x)"
     "   SELECT blob.rid, uuid, datetime(event.mtime%s),"
     "          coalesce(ecomment,comment),"
     "          search_score()"
     "     FROM event, blob"
     "    WHERE blob.rid=event.objid"
     "      AND search_match(coalesce(ecomment,comment));",
     timeline_utc()
  );
  iBest = db_int(0, "SELECT max(x) FROM srch");
  blob_append(&sql,
              "SELECT rid, uuid, date, comment, 0, 0 FROM srch "
              "WHERE 1 ", -1);
  if(!fAll){
  }
}

/*
** Number of significant bits in a u32
*/
static int nbits(u32 x){
  int n = 0; 
  while( x ){ n++; x >>= 1; }
  return n;
}

/*
** Implementation of the rank() function used with rank(matchinfo(*,'pcsx')).
*/
  }
}

/*
** Number of significant bits in a u32
*/
static int nbits(u32 x){
  int n = 0;
  while( x ){ n++; x >>= 1; }
  return n;
}

/*
** Implementation of the rank() function used with rank(matchinfo(*,'pcsx')).
*/
      }
      break;
    }
  }
}

/*
** This routine is a wrapper around search_stext().  
**
** This routine looks up the search text, stores it in an internal
** buffer, and returns a pointer to the text.  Subsequent requests
** for the same document return the same pointer.  The returned pointer
** is valid until the next invocation of this routine.  Call this routine
** with an eType of 0 to clear the cache.
*/
      }
      break;
    }
  }
}

/*
** This routine is a wrapper around search_stext().
**
** This routine looks up the search text, stores it in an internal
** buffer, and returns a pointer to the text.  Subsequent requests
** for the same document return the same pointer.  The returned pointer
** is valid until the next invocation of this routine.  Call this routine
** with an eType of 0 to clear the cache.
*/
      }
    }
  }
  if( iCmd==5 ){
    if( g.argc<4 ) usage("porter ON/OFF");
    db_set_int("search-stemmer", is_truth(g.argv[3]), 0);
  }
     

  /* destroy or rebuild the index, if requested */
  if( iAction>=1 ){
    search_drop_index();
  }
  if( iAction>=2 ){
    search_rebuild_index();
      }
    }
  }
  if( iCmd==5 ){
    if( g.argc<4 ) usage("porter ON/OFF");
    db_set_int("search-stemmer", is_truth(g.argv[3]), 0);
  }


  /* destroy or rebuild the index, if requested */
  if( iAction>=1 ){
    search_drop_index();
  }
  if( iAction>=2 ){
    search_rebuild_index();

Changes to src/shell.c.

            fprintf(pArg->out,"%s\n", sqlite3_column_text(pExplain, 3));
          }
        }
        sqlite3_finalize(pExplain);
        sqlite3_free(zEQP);
      }

#if USE_SYSTEM_SQLITE+0==1
      /* Output TESTCTRL_EXPLAIN text of requested */
      if( pArg && pArg->mode==MODE_Explain && sqlite3_libversion_number()<3008007 ){
        const char *zExplain = 0;
        sqlite3_test_control(SQLITE_TESTCTRL_EXPLAIN_STMT, pStmt, &zExplain);
        if( zExplain && zExplain[0] ){
          fprintf(pArg->out, "%s", zExplain);
        }
      }
#endif

      /* If the shell is currently in ".explain" mode, gather the extra
      ** data required to add indents to the output.*/
      if( pArg && pArg->mode==MODE_Explain ){
        explain_data_prepare(pArg, pStmt);
      }

      /* perform the first step.  this will tell us if we
            fprintf(pArg->out,"%s\n", sqlite3_column_text(pExplain, 3));
          }
        }
        sqlite3_finalize(pExplain);
        sqlite3_free(zEQP);
      }












      /* If the shell is currently in ".explain" mode, gather the extra
      ** data required to add indents to the output.*/
      if( pArg && pArg->mode==MODE_Explain ){
        explain_data_prepare(pArg, pStmt);
      }

      /* perform the first step.  this will tell us if we
      { "always",                SQLITE_TESTCTRL_ALWAYS                 },
      { "reserve",               SQLITE_TESTCTRL_RESERVE                },
      { "optimizations",         SQLITE_TESTCTRL_OPTIMIZATIONS          },
      { "iskeyword",             SQLITE_TESTCTRL_ISKEYWORD              },
      { "scratchmalloc",         SQLITE_TESTCTRL_SCRATCHMALLOC          },
      { "byteorder",             SQLITE_TESTCTRL_BYTEORDER              },
      { "never_corrupt",         SQLITE_TESTCTRL_NEVER_CORRUPT          },

    };
    int testctrl = -1;
    int rc = 0;
    int i, n;
    open_db(p, 0);

    /* convert testctrl text option to value. allow any unique prefix
      { "always",                SQLITE_TESTCTRL_ALWAYS                 },
      { "reserve",               SQLITE_TESTCTRL_RESERVE                },
      { "optimizations",         SQLITE_TESTCTRL_OPTIMIZATIONS          },
      { "iskeyword",             SQLITE_TESTCTRL_ISKEYWORD              },
      { "scratchmalloc",         SQLITE_TESTCTRL_SCRATCHMALLOC          },
      { "byteorder",             SQLITE_TESTCTRL_BYTEORDER              },
      { "never_corrupt",         SQLITE_TESTCTRL_NEVER_CORRUPT          },
      { "imposter",              SQLITE_TESTCTRL_IMPOSTER               },
    };
    int testctrl = -1;
    int rc = 0;
    int i, n;
    open_db(p, 0);

    /* convert testctrl text option to value. allow any unique prefix
            fprintf(p->out, "%d (0x%08x)\n", rc, rc);
          } else {
            fprintf(stderr,"Error: testctrl %s takes a single char * option\n",
                            azArg[1]);
          }
          break;
#endif













        case SQLITE_TESTCTRL_BITVEC_TEST:         
        case SQLITE_TESTCTRL_FAULT_INSTALL:       
        case SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS: 
        case SQLITE_TESTCTRL_SCRATCHMALLOC:       
        default:
          fprintf(stderr,"Error: CLI support for testctrl %s not implemented\n",
            fprintf(p->out, "%d (0x%08x)\n", rc, rc);
          } else {
            fprintf(stderr,"Error: testctrl %s takes a single char * option\n",
                            azArg[1]);
          }
          break;
#endif

        case SQLITE_TESTCTRL_IMPOSTER:
          if( nArg==5 ){
            rc = sqlite3_test_control(testctrl, p->db, 
                          azArg[2],
                          integerValue(azArg[3]),
                          integerValue(azArg[4]));
          }else{
            fprintf(stderr,"Usage: .testctrl initmode dbName onoff tnum\n");
            rc = 1;
          }
          break;

        case SQLITE_TESTCTRL_BITVEC_TEST:         
        case SQLITE_TESTCTRL_FAULT_INSTALL:       
        case SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS: 
        case SQLITE_TESTCTRL_SCRATCHMALLOC:       
        default:
          fprintf(stderr,"Error: CLI support for testctrl %s not implemented\n",

Changes to src/sitemap.c.

  @ <li>%z(href("%R/home"))Home Page</a>
  @   <ul>
  @   <li>%z(href("%R/docsrc"))Search Project Documentation</a></li>
  @   </ul></li>
  @ <li>%z(href("%R/tree"))File Browser</a></li>
  @   <ul>
  @   <li>%z(href("%R/tree?type=tree&ci=trunk"))Tree-view,
  @        Trunk Checkin</a></li>
  @   <li>%z(href("%R/tree?type=flat"))Flat-view</a></li>
  @   <li>%z(href("%R/fileage?name=trunk"))File ages for Trunk</a></li>
  @ </ul>
  @ <li>%z(href("%R/timeline?n=200"))Project Timeline</a></li>
  @ <ul>
  @   <li>%z(href("%R/timeline?a=1970-01-01&y=ci&n=10"))First 10 
  @        checkins</a></li>
  @   <li>%z(href("%R/timeline?n=all&namechng"))All checkins with file name
  @        changes</a></li>
  @   <li>%z(href("%R/reports"))Activity Reports</a></li>
  @ </ul>
  @ <li>%z(href("%R/brlist"))Branches</a></li>
  @ <ul>
  @   <li>%z(href("%R/leaves"))Leaf Checkins</a></li>
  @   <li>%z(href("%R/taglist"))List of Tags</a></li>
  @ </ul>
  @ </li>
  @ <li>%z(href("%R/wikihelp"))Wiki</a>
  @   <ul>
  @     <li>%z(href("%R/wikisrch"))Wiki Search</a></li>
  @     <li>%z(href("%R/wcontent"))List of Wiki Pages</a></li>
  @ <li>%z(href("%R/home"))Home Page</a>
  @   <ul>
  @   <li>%z(href("%R/docsrc"))Search Project Documentation</a></li>
  @   </ul></li>
  @ <li>%z(href("%R/tree"))File Browser</a></li>
  @   <ul>
  @   <li>%z(href("%R/tree?type=tree&ci=trunk"))Tree-view,
  @        Trunk Check-in</a></li>
  @   <li>%z(href("%R/tree?type=flat"))Flat-view</a></li>
  @   <li>%z(href("%R/fileage?name=trunk"))File ages for Trunk</a></li>
  @ </ul>
  @ <li>%z(href("%R/timeline?n=200"))Project Timeline</a></li>
  @ <ul>
  @   <li>%z(href("%R/timeline?a=1970-01-01&y=ci&n=10"))First 10 
  @        check-ins</a></li>
  @   <li>%z(href("%R/timeline?n=all&namechng"))All check-ins with file name
  @        changes</a></li>
  @   <li>%z(href("%R/reports"))Activity Reports</a></li>
  @ </ul>
  @ <li>%z(href("%R/brlist"))Branches</a></li>
  @ <ul>
  @   <li>%z(href("%R/leaves"))Leaf Check-ins</a></li>
  @   <li>%z(href("%R/taglist"))List of Tags</a></li>
  @ </ul>
  @ </li>
  @ <li>%z(href("%R/wikihelp"))Wiki</a>
  @   <ul>
  @     <li>%z(href("%R/wikisrch"))Wiki Search</a></li>
  @     <li>%z(href("%R/wcontent"))List of Wiki Pages</a></li>
  @   <li>%z(href("%R/modreq"))Pending Moderation Requests</a></li>
  @   <li>%z(href("%R/admin_log"))Admin log</a></li>
  @   <li>%z(href("%R/cachestat"))Status of the web-page cache</a></li>
  @   </ul></li>
  @ <li>Test Pages
  @   <ul>
  @   <li>%z(href("%R/test_env"))CGI Environment Test</a></li>
  @   <li>%z(href("%R/test_timewarps"))List of "Timewarp" Checkins</a></li>
  @   <li>%z(href("%R/test-rename-list"))List of file renames</a></li>
  @   <li>%z(href("%R/hash-color-test"))Page to experiment with the automatic
  @       colors assigned to branch names</a>
  @   </ul></li>
  @ </ul></li>
  style_footer();
}
  @   <li>%z(href("%R/modreq"))Pending Moderation Requests</a></li>
  @   <li>%z(href("%R/admin_log"))Admin log</a></li>
  @   <li>%z(href("%R/cachestat"))Status of the web-page cache</a></li>
  @   </ul></li>
  @ <li>Test Pages
  @   <ul>
  @   <li>%z(href("%R/test_env"))CGI Environment Test</a></li>
  @   <li>%z(href("%R/test_timewarps"))List of "Timewarp" Check-ins</a></li>
  @   <li>%z(href("%R/test-rename-list"))List of file renames</a></li>
  @   <li>%z(href("%R/hash-color-test"))Page to experiment with the automatic
  @       colors assigned to branch names</a>
  @   </ul></li>
  @ </ul></li>
  style_footer();
}

Changes to src/skins.c.

  { "Plain Gray, No Logo",               "plain_gray",        0, 0 },
  { "Khaki, No Logo",                    "khaki",             0, 0 },
  { "Black & White, Menu on Left",       "black_and_white",   0, 0 },
  { "Shadow boxes & Rounded Corners",    "rounded1",          0, 0 },
  { "Enhanced Default",                  "enhanced1",         0, 0 },
  { "San Francisco Modern",              "etienne1",          0, 0 },
  { "Eagle",                             "eagle",             1, 0 },

};

/*
** Alternative skins can be specified in the CGI script or by options
** on the "http", "ui", and "server" commands.  The alternative skin
** name must be one of the aBuiltinSkin[].zLabel names.  If there is
** a match, that alternative is used.
  { "Plain Gray, No Logo",               "plain_gray",        0, 0 },
  { "Khaki, No Logo",                    "khaki",             0, 0 },
  { "Black & White, Menu on Left",       "black_and_white",   0, 0 },
  { "Shadow boxes & Rounded Corners",    "rounded1",          0, 0 },
  { "Enhanced Default",                  "enhanced1",         0, 0 },
  { "San Francisco Modern",              "etienne1",          0, 0 },
  { "Eagle",                             "eagle",             1, 0 },
  { "Xekri",                             "xekri",             0, 0 },
};

/*
** Alternative skins can be specified in the CGI script or by options
** on the "http", "ui", and "server" commands.  The alternative skin
** name must be one of the aBuiltinSkin[].zLabel names.  If there is
** a match, that alternative is used.

Changes to src/sqlite3.c.

/******************************************************************************
** This file is an amalgamation of many separate C source files from SQLite
** version 3.8.8.3.  By combining all the individual C code files into this 
** single large file, the entire code can be compiled as a single translation
** unit.  This allows many compilers to do optimizations that would not be
** possible if the files were compiled separately.  Performance improvements
** of 5% or more are commonly seen when SQLite is compiled as a single
** translation unit.
**
** This file is all you need to compile SQLite.  To use SQLite in other
/******************************************************************************
** This file is an amalgamation of many separate C source files from SQLite
** version 3.8.8.  By combining all the individual C code files into this 
** single large file, the entire code can be compiled as a single translation
** unit.  This allows many compilers to do optimizations that would not be
** possible if the files were compiled separately.  Performance improvements
** of 5% or more are commonly seen when SQLite is compiled as a single
** translation unit.
**
** This file is all you need to compile SQLite.  To use SQLite in other
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.8.8.3"
#define SQLITE_VERSION_NUMBER 3008008
#define SQLITE_SOURCE_ID      "2015-02-25 13:29:11 9d6c1880fb75660bbabd693175579529785f8a6b"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.8.8"
#define SQLITE_VERSION_NUMBER 3008008
#define SQLITE_SOURCE_ID      "2015-02-25 14:25:31 6d132e7a224ee68b5cefe9222944aac5760ffc20"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
** opcode causes the xFileControl method to swap the file handle with the one
** pointed to by the pArg argument.  This capability is used during testing
** and only needs to be supported when SQLITE_TEST is defined.
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_GET_LOCKPROXYFILE             2
#define SQLITE_SET_LOCKPROXYFILE             3
#define SQLITE_LAST_ERRNO                    4
#define SQLITE_FCNTL_SIZE_HINT               5
#define SQLITE_FCNTL_CHUNK_SIZE              6
#define SQLITE_FCNTL_FILE_POINTER            7
#define SQLITE_FCNTL_SYNC_OMITTED            8
#define SQLITE_FCNTL_WIN32_AV_RETRY          9
#define SQLITE_FCNTL_PERSIST_WAL            10
#define SQLITE_FCNTL_OVERWRITE              11
#define SQLITE_FCNTL_VFSNAME                12
#define SQLITE_FCNTL_POWERSAFE_OVERWRITE    13
#define SQLITE_FCNTL_PRAGMA                 14
#define SQLITE_FCNTL_BUSYHANDLER            15
#define SQLITE_FCNTL_TEMPFILENAME           16
#define SQLITE_FCNTL_MMAP_SIZE              18
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23







/*
** CAPI3REF: Mutex Handle
**
** The mutex module within SQLite defines [sqlite3_mutex] to be an
** abstract type for a mutex object.  The SQLite core never looks
** at the internal representation of an [sqlite3_mutex].  It only
** opcode causes the xFileControl method to swap the file handle with the one
** pointed to by the pArg argument.  This capability is used during testing
** and only needs to be supported when SQLITE_TEST is defined.
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_FCNTL_GET_LOCKPROXYFILE       2
#define SQLITE_FCNTL_SET_LOCKPROXYFILE       3
#define SQLITE_FCNTL_LAST_ERRNO              4
#define SQLITE_FCNTL_SIZE_HINT               5
#define SQLITE_FCNTL_CHUNK_SIZE              6
#define SQLITE_FCNTL_FILE_POINTER            7
#define SQLITE_FCNTL_SYNC_OMITTED            8
#define SQLITE_FCNTL_WIN32_AV_RETRY          9
#define SQLITE_FCNTL_PERSIST_WAL            10
#define SQLITE_FCNTL_OVERWRITE              11
#define SQLITE_FCNTL_VFSNAME                12
#define SQLITE_FCNTL_POWERSAFE_OVERWRITE    13
#define SQLITE_FCNTL_PRAGMA                 14
#define SQLITE_FCNTL_BUSYHANDLER            15
#define SQLITE_FCNTL_TEMPFILENAME           16
#define SQLITE_FCNTL_MMAP_SIZE              18
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23

/* deprecated names */
#define SQLITE_GET_LOCKPROXYFILE      SQLITE_FCNTL_GET_LOCKPROXYFILE
#define SQLITE_SET_LOCKPROXYFILE      SQLITE_FCNTL_SET_LOCKPROXYFILE
#define SQLITE_LAST_ERRNO             SQLITE_FCNTL_LAST_ERRNO


/*
** CAPI3REF: Mutex Handle
**
** The mutex module within SQLite defines [sqlite3_mutex] to be an
** abstract type for a mutex object.  The SQLite core never looks
** at the internal representation of an [sqlite3_mutex].  It only
SQLITE_API void sqlite3_free_table(char **result);

/*
** CAPI3REF: Formatted String Printing Functions
**
** These routines are work-alikes of the "printf()" family of functions
** from the standard C library.




**
** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their
** results into memory obtained from [sqlite3_malloc()].
** The strings returned by these two routines should be
** released by [sqlite3_free()].  ^Both routines return a
** NULL pointer if [sqlite3_malloc()] is unable to allocate enough
** memory to hold the resulting string.
SQLITE_API void sqlite3_free_table(char **result);

/*
** CAPI3REF: Formatted String Printing Functions
**
** These routines are work-alikes of the "printf()" family of functions
** from the standard C library.
** These routines understand most of the common K&R formatting options,
** plus some additional non-standard formats, detailed below.
** Note that some of the more obscure formatting options from recent
** C-library standards are omitted from this implementation.
**
** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their
** results into memory obtained from [sqlite3_malloc()].
** The strings returned by these two routines should be
** released by [sqlite3_free()].  ^Both routines return a
** NULL pointer if [sqlite3_malloc()] is unable to allocate enough
** memory to hold the resulting string.
** written will be n-1 characters.
**
** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf().
**
** These routines all implement some additional formatting
** options that are useful for constructing SQL statements.
** All of the usual printf() formatting options apply.  In addition, there
** are "%q", "%Q", and "%z" options.
**
** ^(The %q option works like %s in that it substitutes a nul-terminated
** string from the argument list.  But %q also doubles every '\'' character.
** %q is designed for use inside a string literal.)^  By doubling each '\''
** character it escapes that character and allows it to be inserted into
** the string.
**
** written will be n-1 characters.
**
** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf().
**
** These routines all implement some additional formatting
** options that are useful for constructing SQL statements.
** All of the usual printf() formatting options apply.  In addition, there
** are "%q", "%Q", "%w" and "%z" options.
**
** ^(The %q option works like %s in that it substitutes a nul-terminated
** string from the argument list.  But %q also doubles every '\'' character.
** %q is designed for use inside a string literal.)^  By doubling each '\''
** character it escapes that character and allows it to be inserted into
** the string.
**
**  char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText);
**  sqlite3_exec(db, zSQL, 0, 0, 0);
**  sqlite3_free(zSQL);
** </pre></blockquote>
**
** The code above will render a correct SQL statement in the zSQL
** variable even if the zText variable is a NULL pointer.






**
** ^(The "%z" formatting option works like "%s" but with the
** addition that after the string has been read and copied into
** the result, [sqlite3_free()] is called on the input string.)^
*/
SQLITE_API char *sqlite3_mprintf(const char*,...);
SQLITE_API char *sqlite3_vmprintf(const char*, va_list);
**  char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText);
**  sqlite3_exec(db, zSQL, 0, 0, 0);
**  sqlite3_free(zSQL);
** </pre></blockquote>
**
** The code above will render a correct SQL statement in the zSQL
** variable even if the zText variable is a NULL pointer.
**
** ^(The "%w" formatting option is like "%q" except that it expects to
** be contained within double-quotes instead of single quotes, and it
** escapes the double-quote character instead of the single-quote
** character.)^  The "%w" formatting option is intended for safely inserting
** table and column names into a constructed SQL statement.
**
** ^(The "%z" formatting option works like "%s" but with the
** addition that after the string has been read and copied into
** the result, [sqlite3_free()] is called on the input string.)^
*/
SQLITE_API char *sqlite3_mprintf(const char*,...);
SQLITE_API char *sqlite3_vmprintf(const char*, va_list);
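(Aside: the practical split among the options documented above is that "%w" quotes an identifier meant to sit inside double-quotes, while "%q"/"%Q" quote string literals.  A small self-contained sketch follows; the table name, column name, and calling convention are made up for illustration and are not part of SQLite or of this check-in.)

#include "sqlite3.h"

/* Illustrative sketch: build a query with %w for an identifier and %Q for
** a possibly-NULL string literal, then release the formatted SQL text. */
static int count_rows_named(sqlite3 *db, const char *zTable,
                            const char *zName, int *pnRow){
  sqlite3_stmt *pStmt = 0;
  int rc;
  char *zSql = sqlite3_mprintf(
      "SELECT count(*) FROM \"%w\" WHERE name=%Q", zTable, zName);
  if( zSql==0 ) return SQLITE_NOMEM;
  rc = sqlite3_prepare_v2(db, zSql, -1, &pStmt, 0);
  sqlite3_free(zSql);
  if( rc==SQLITE_OK && sqlite3_step(pStmt)==SQLITE_ROW ){
    *pnRow = sqlite3_column_int(pStmt, 0);
  }
  sqlite3_finalize(pStmt);
  return rc;
}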
**
** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled
** successfully.  An [error code] is returned otherwise.)^
**
** ^Shared cache is disabled by default. But this might change in
** future releases of SQLite.  Applications that care about shared
** cache setting should set it explicitly.





**
** This interface is threadsafe on processors where writing a
** 32-bit integer is atomic.
**
** See Also:  [SQLite Shared-Cache Mode]
*/
SQLITE_API int sqlite3_enable_shared_cache(int);
**
** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled
** successfully.  An [error code] is returned otherwise.)^
**
** ^Shared cache is disabled by default. But this might change in
** future releases of SQLite.  Applications that care about shared
** cache setting should set it explicitly.
**
** Note: This method is disabled on MacOS X 10.7 and iOS version 5.0
** and will always return SQLITE_MISUSE. On those systems, 
** shared cache mode should be enabled per-database connection via 
** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE].
**
** This interface is threadsafe on processors where writing a
** 32-bit integer is atomic.
**
** See Also:  [SQLite Shared-Cache Mode]
*/
SQLITE_API int sqlite3_enable_shared_cache(int);
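(Aside: as the note above says, on platforms where the global switch is unavailable the per-connection flag is the supported way to get shared-cache behaviour.  A minimal sketch, with an illustrative file name and helper, follows.)

#include "sqlite3.h"

/* Illustrative sketch: request shared-cache mode for one connection via
** sqlite3_open_v2(), as the note above recommends. */
static int open_shared(const char *zFile, sqlite3 **ppDb){
  return sqlite3_open_v2(zFile, ppDb,
             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE |
             SQLITE_OPEN_SHAREDCACHE, 0);
}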
#define SQLITE_TESTCTRL_LOCALTIME_FAULT         18
#define SQLITE_TESTCTRL_EXPLAIN_STMT            19  /* NOT USED */
#define SQLITE_TESTCTRL_NEVER_CORRUPT           20
#define SQLITE_TESTCTRL_VDBE_COVERAGE           21
#define SQLITE_TESTCTRL_BYTEORDER               22
#define SQLITE_TESTCTRL_ISINIT                  23
#define SQLITE_TESTCTRL_SORTER_MMAP             24

#define SQLITE_TESTCTRL_LAST                    24

/*
** CAPI3REF: SQLite Runtime Status
**
** ^This interface is used to retrieve runtime status information
** about the performance of SQLite, and optionally to reset various
** highwater marks.  ^The first argument is an integer code for
#define SQLITE_TESTCTRL_LOCALTIME_FAULT         18
#define SQLITE_TESTCTRL_EXPLAIN_STMT            19  /* NOT USED */
#define SQLITE_TESTCTRL_NEVER_CORRUPT           20
#define SQLITE_TESTCTRL_VDBE_COVERAGE           21
#define SQLITE_TESTCTRL_BYTEORDER               22
#define SQLITE_TESTCTRL_ISINIT                  23
#define SQLITE_TESTCTRL_SORTER_MMAP             24
#define SQLITE_TESTCTRL_IMPOSTER                25
#define SQLITE_TESTCTRL_LAST                    25

/*
** CAPI3REF: SQLite Runtime Status
**
** ^This interface is used to retrieve runtime status information
** about the performance of SQLite, and optionally to reset various
** highwater marks.  ^The first argument is an integer code for
SQLITE_PRIVATE int sqlite3BtreeSetPagerFlags(Btree*,unsigned);
SQLITE_PRIVATE int sqlite3BtreeSyncDisabled(Btree*);
SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int nPagesize, int nReserve, int eFix);
SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree*);
SQLITE_PRIVATE int sqlite3BtreeMaxPageCount(Btree*,int);
SQLITE_PRIVATE u32 sqlite3BtreeLastPage(Btree*);
SQLITE_PRIVATE int sqlite3BtreeSecureDelete(Btree*,int);
SQLITE_PRIVATE int sqlite3BtreeGetReserve(Btree*);
#if defined(SQLITE_HAS_CODEC) || defined(SQLITE_DEBUG)
SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p);
#endif
SQLITE_PRIVATE int sqlite3BtreeSetAutoVacuum(Btree *, int);
SQLITE_PRIVATE int sqlite3BtreeGetAutoVacuum(Btree *);
SQLITE_PRIVATE int sqlite3BtreeBeginTrans(Btree*,int);
SQLITE_PRIVATE int sqlite3BtreeCommitPhaseOne(Btree*, const char *zMaster);
SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree*, int);
SQLITE_PRIVATE int sqlite3BtreeCommit(Btree*);
SQLITE_PRIVATE int sqlite3BtreeRollback(Btree*,int,int);
SQLITE_PRIVATE int sqlite3BtreeSetPagerFlags(Btree*,unsigned);
SQLITE_PRIVATE int sqlite3BtreeSyncDisabled(Btree*);
SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int nPagesize, int nReserve, int eFix);
SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree*);
SQLITE_PRIVATE int sqlite3BtreeMaxPageCount(Btree*,int);
SQLITE_PRIVATE u32 sqlite3BtreeLastPage(Btree*);
SQLITE_PRIVATE int sqlite3BtreeSecureDelete(Btree*,int);
SQLITE_PRIVATE int sqlite3BtreeGetOptimalReserve(Btree*);

SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p);

SQLITE_PRIVATE int sqlite3BtreeSetAutoVacuum(Btree *, int);
SQLITE_PRIVATE int sqlite3BtreeGetAutoVacuum(Btree *);
SQLITE_PRIVATE int sqlite3BtreeBeginTrans(Btree*,int);
SQLITE_PRIVATE int sqlite3BtreeCommitPhaseOne(Btree*, const char *zMaster);
SQLITE_PRIVATE int sqlite3BtreeCommitPhaseTwo(Btree*, int);
SQLITE_PRIVATE int sqlite3BtreeCommit(Btree*);
SQLITE_PRIVATE int sqlite3BtreeRollback(Btree*,int,int);
  int aLimit[SQLITE_N_LIMIT];   /* Limits */
  int nMaxSorterMmap;           /* Maximum size of regions mapped by sorter */
  struct sqlite3InitInfo {      /* Information used during initialization */
    int newTnum;                /* Rootpage of table being initialized */
    u8 iDb;                     /* Which db file is being initialized */
    u8 busy;                    /* TRUE if currently initializing */
    u8 orphanTrigger;           /* Last statement is orphaned TEMP trigger */

  } init;
  int nVdbeActive;              /* Number of VDBEs currently running */
  int nVdbeRead;                /* Number of active VDBEs that read or write */
  int nVdbeWrite;               /* Number of active VDBEs that read and write */
  int nVdbeExec;                /* Number of nested calls to VdbeExec() */
  int nExtension;               /* Number of loaded extensions */
  void **aExtension;            /* Array of shared library handles */
  int aLimit[SQLITE_N_LIMIT];   /* Limits */
  int nMaxSorterMmap;           /* Maximum size of regions mapped by sorter */
  struct sqlite3InitInfo {      /* Information used during initialization */
    int newTnum;                /* Rootpage of table being initialized */
    u8 iDb;                     /* Which db file is being initialized */
    u8 busy;                    /* TRUE if currently initializing */
    u8 orphanTrigger;           /* Last statement is orphaned TEMP trigger */
    u8 imposterTable;           /* Building an imposter table */
  } init;
  int nVdbeActive;              /* Number of VDBEs currently running */
  int nVdbeRead;                /* Number of active VDBEs that read or write */
  int nVdbeWrite;               /* Number of active VDBEs that read and write */
  int nVdbeExec;                /* Number of nested calls to VdbeExec() */
  int nExtension;               /* Number of loaded extensions */
  void **aExtension;            /* Array of shared library handles */
#define EP_Skip      0x001000 /* COLLATE, AS, or UNLIKELY */
#define EP_Reduced   0x002000 /* Expr struct EXPR_REDUCEDSIZE bytes only */
#define EP_TokenOnly 0x004000 /* Expr struct EXPR_TOKENONLYSIZE bytes only */
#define EP_Static    0x008000 /* Held in memory not obtained from malloc() */
#define EP_MemToken  0x010000 /* Need to sqlite3DbFree() Expr.zToken */
#define EP_NoReduce  0x020000 /* Cannot EXPRDUP_REDUCE this Expr */
#define EP_Unlikely  0x040000 /* unlikely() or likelihood() function */
#define EP_Constant  0x080000 /* Node is a constant */
#define EP_CanBeNull 0x100000 /* Can be null despite NOT NULL constraint */







/*
** These macros can be used to test, set, or clear bits in the 
** Expr.flags field.
*/
#define ExprHasProperty(E,P)     (((E)->flags&(P))!=0)
#define ExprHasAllProperty(E,P)  (((E)->flags&(P))==(P))
#define EP_Skip      0x001000 /* COLLATE, AS, or UNLIKELY */
#define EP_Reduced   0x002000 /* Expr struct EXPR_REDUCEDSIZE bytes only */
#define EP_TokenOnly 0x004000 /* Expr struct EXPR_TOKENONLYSIZE bytes only */
#define EP_Static    0x008000 /* Held in memory not obtained from malloc() */
#define EP_MemToken  0x010000 /* Need to sqlite3DbFree() Expr.zToken */
#define EP_NoReduce  0x020000 /* Cannot EXPRDUP_REDUCE this Expr */
#define EP_Unlikely  0x040000 /* unlikely() or likelihood() function */
#define EP_ConstFunc 0x080000 /* Node is a SQLITE_FUNC_CONSTANT function */
#define EP_CanBeNull 0x100000 /* Can be null despite NOT NULL constraint */
#define EP_Subquery  0x200000 /* Tree contains a TK_SELECT operator */

/*
** Combinations of two or more EP_* flags
*/
#define EP_Propagate (EP_Collate|EP_Subquery) /* Propagate these bits up tree */

/*
** These macros can be used to test, set, or clear bits in the 
** Expr.flags field.
*/
#define ExprHasProperty(E,P)     (((E)->flags&(P))!=0)
#define ExprHasAllProperty(E,P)  (((E)->flags&(P))==(P))
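The two macros above are plain bit tests against the EP_* values defined in this hunk. A minimal sketch of how they are meant to be read (editorial illustration only; the helper names below are hypothetical and do not appear in sqlite3.c):

/* Editor's sketch -- not part of this check-in */
static int exprUsesReducedLayout(const Expr *pExpr){
  /* true if EITHER size-reduction bit is set */
  return ExprHasProperty(pExpr, EP_Reduced|EP_TokenOnly);
}
static int exprConstFuncInSubquery(const Expr *pExpr){
  /* true only if BOTH bits are set */
  return ExprHasAllProperty(pExpr, EP_ConstFunc|EP_Subquery);
}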
SQLITE_PRIVATE Expr *sqlite3ExprFunction(Parse*,ExprList*, Token*);
SQLITE_PRIVATE void sqlite3ExprAssignVarNumber(Parse*, Expr*);
SQLITE_PRIVATE void sqlite3ExprDelete(sqlite3*, Expr*);
SQLITE_PRIVATE ExprList *sqlite3ExprListAppend(Parse*,ExprList*,Expr*);
SQLITE_PRIVATE void sqlite3ExprListSetName(Parse*,ExprList*,Token*,int);
SQLITE_PRIVATE void sqlite3ExprListSetSpan(Parse*,ExprList*,ExprSpan*);
SQLITE_PRIVATE void sqlite3ExprListDelete(sqlite3*, ExprList*);

SQLITE_PRIVATE int sqlite3Init(sqlite3*, char**);
SQLITE_PRIVATE int sqlite3InitCallback(void*, int, char**, char**);
SQLITE_PRIVATE void sqlite3Pragma(Parse*,Token*,Token*,Token*,int);
SQLITE_PRIVATE void sqlite3ResetAllSchemasOfConnection(sqlite3*);
SQLITE_PRIVATE void sqlite3ResetOneSchema(sqlite3*,int);
SQLITE_PRIVATE void sqlite3CollapseDatabaseArray(sqlite3*);
SQLITE_PRIVATE void sqlite3BeginParse(Parse*,int);







SQLITE_PRIVATE Expr *sqlite3ExprFunction(Parse*,ExprList*, Token*);
SQLITE_PRIVATE void sqlite3ExprAssignVarNumber(Parse*, Expr*);
SQLITE_PRIVATE void sqlite3ExprDelete(sqlite3*, Expr*);
SQLITE_PRIVATE ExprList *sqlite3ExprListAppend(Parse*,ExprList*,Expr*);
SQLITE_PRIVATE void sqlite3ExprListSetName(Parse*,ExprList*,Token*,int);
SQLITE_PRIVATE void sqlite3ExprListSetSpan(Parse*,ExprList*,ExprSpan*);
SQLITE_PRIVATE void sqlite3ExprListDelete(sqlite3*, ExprList*);
SQLITE_PRIVATE u32 sqlite3ExprListFlags(const ExprList*);
SQLITE_PRIVATE int sqlite3Init(sqlite3*, char**);
SQLITE_PRIVATE int sqlite3InitCallback(void*, int, char**, char**);
SQLITE_PRIVATE void sqlite3Pragma(Parse*,Token*,Token*,Token*,int);
SQLITE_PRIVATE void sqlite3ResetAllSchemasOfConnection(sqlite3*);
SQLITE_PRIVATE void sqlite3ResetOneSchema(sqlite3*,int);
SQLITE_PRIVATE void sqlite3CollapseDatabaseArray(sqlite3*);
SQLITE_PRIVATE void sqlite3BeginParse(Parse*,int);
  #define sqlite3JournalExists(p) 1
#endif

SQLITE_PRIVATE void sqlite3MemJournalOpen(sqlite3_file *);
SQLITE_PRIVATE int sqlite3MemJournalSize(void);
SQLITE_PRIVATE int sqlite3IsMemJournal(sqlite3_file *);


#if SQLITE_MAX_EXPR_DEPTH>0
SQLITE_PRIVATE   void sqlite3ExprSetHeight(Parse *pParse, Expr *p);
SQLITE_PRIVATE   int sqlite3SelectExprHeight(Select *);
SQLITE_PRIVATE   int sqlite3ExprCheckHeight(Parse*, int);
#else
  #define sqlite3ExprSetHeight(x,y)
  #define sqlite3SelectExprHeight(x) 0
  #define sqlite3ExprCheckHeight(x,y)
#endif

SQLITE_PRIVATE u32 sqlite3Get4byte(const u8*);
SQLITE_PRIVATE void sqlite3Put4byte(u8*, u32);








  #define sqlite3JournalExists(p) 1
#endif

SQLITE_PRIVATE void sqlite3MemJournalOpen(sqlite3_file *);
SQLITE_PRIVATE int sqlite3MemJournalSize(void);
SQLITE_PRIVATE int sqlite3IsMemJournal(sqlite3_file *);

SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p);
#if SQLITE_MAX_EXPR_DEPTH>0

SQLITE_PRIVATE   int sqlite3SelectExprHeight(Select *);
SQLITE_PRIVATE   int sqlite3ExprCheckHeight(Parse*, int);
#else

  #define sqlite3SelectExprHeight(x) 0
  #define sqlite3ExprCheckHeight(x,y)
#endif

SQLITE_PRIVATE u32 sqlite3Get4byte(const u8*);
SQLITE_PRIVATE void sqlite3Put4byte(u8*, u32);

#ifdef SQLITE_EBCDIC
      0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, /* 0x */
     16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, /* 1x */
     32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, /* 2x */
     48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, /* 3x */
     64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, /* 4x */
     80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, /* 5x */
     96, 97, 66, 67, 68, 69, 70, 71, 72, 73,106,107,108,109,110,111, /* 6x */
    112, 81, 82, 83, 84, 85, 86, 87, 88, 89,122,123,124,125,126,127, /* 7x */
    128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143, /* 8x */
    144,145,146,147,148,149,150,151,152,153,154,155,156,157,156,159, /* 9x */
    160,161,162,163,164,165,166,167,168,169,170,171,140,141,142,175, /* Ax */
    176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191, /* Bx */
    192,129,130,131,132,133,134,135,136,137,202,203,204,205,206,207, /* Cx */
    208,145,146,147,148,149,150,151,152,153,218,219,220,221,222,223, /* Dx */
    224,225,162,163,164,165,166,167,168,169,232,203,204,205,206,207, /* Ex */
    239,240,241,242,243,244,245,246,247,248,249,219,220,221,222,255, /* Fx */
#endif
};

/*
** The following 256 byte lookup table is used to support SQLites built-in
** equivalents to the following standard library functions:
**







#ifdef SQLITE_EBCDIC
      0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, /* 0x */
     16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, /* 1x */
     32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, /* 2x */
     48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, /* 3x */
     64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, /* 4x */
     80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, /* 5x */
     96, 97, 98, 99,100,101,102,103,104,105,106,107,108,109,110,111, /* 6x */
    112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127, /* 7x */
    128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143, /* 8x */
    144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159, /* 9x */
    160,161,162,163,164,165,166,167,168,169,170,171,140,141,142,175, /* Ax */
    176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191, /* Bx */
    192,129,130,131,132,133,134,135,136,137,202,203,204,205,206,207, /* Cx */
    208,145,146,147,148,149,150,151,152,153,218,219,220,221,222,223, /* Dx */
    224,225,162,163,164,165,166,167,168,169,234,235,236,237,238,239, /* Ex */
    240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255, /* Fx */
#endif
};

/*
** The following 256 byte lookup table is used to support SQLites built-in
** equivalents to the following standard library functions:
**
  Mem *aVar;              /* Values for the OP_Variable opcode. */
  char **azVar;           /* Name of variables */
  ynVar nVar;             /* Number of entries in aVar[] */
  ynVar nzVar;            /* Number of entries in azVar[] */
  u32 cacheCtr;           /* VdbeCursor row cache generation counter */
  int pc;                 /* The program counter */
  int rc;                 /* Value to return */



  u16 nResColumn;         /* Number of columns in one row of the result set */
  u8 errorAction;         /* Recovery action to do in case of an error */
  u8 minWriteFileFormat;  /* Minimum file format for writable database files */
  bft explain:2;          /* True if EXPLAIN present on SQL command */
  bft inVtabMethod:2;     /* See comments above */
  bft changeCntOn:1;      /* True to update the change-counter */
  bft expired:1;          /* True if the VM needs to be recompiled */







  Mem *aVar;              /* Values for the OP_Variable opcode. */
  char **azVar;           /* Name of variables */
  ynVar nVar;             /* Number of entries in aVar[] */
  ynVar nzVar;            /* Number of entries in azVar[] */
  u32 cacheCtr;           /* VdbeCursor row cache generation counter */
  int pc;                 /* The program counter */
  int rc;                 /* Value to return */
#ifdef SQLITE_DEBUG
  int rcApp;              /* errcode set by sqlite3_result_error_code() */
#endif
  u16 nResColumn;         /* Number of columns in one row of the result set */
  u8 errorAction;         /* Recovery action to do in case of an error */
  u8 minWriteFileFormat;  /* Minimum file format for writable database files */
  bft explain:2;          /* True if EXPLAIN present on SQL command */
  bft inVtabMethod:2;     /* See comments above */
  bft changeCntOn:1;      /* True to update the change-counter */
  bft expired:1;          /* True if the VM needs to be recompiled */
      if( pNew ){
        pNew->id = id;
        pNew->cnt = 0;
      }
      break;
    }
    default: {

      assert( id-2 >= 0 );
      assert( id-2 < (int)(sizeof(aStatic)/sizeof(aStatic[0])) );




      pNew = &aStatic[id-2];
      pNew->id = id;
      break;
    }
  }
  return (sqlite3_mutex*)pNew;
}

/*
** This routine deallocates a previously allocated mutex.
*/
static void debugMutexFree(sqlite3_mutex *pX){
  sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX;
  assert( p->cnt==0 );
  assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE );
  sqlite3_free(p);





}

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK







      if( pNew ){
        pNew->id = id;
        pNew->cnt = 0;
      }
      break;
    }
    default: {
#ifdef SQLITE_ENABLE_API_ARMOR
      if( id-2<0 || id-2>=ArraySize(aStatic) ){

        (void)SQLITE_MISUSE_BKPT;
        return 0;
      }
#endif
      pNew = &aStatic[id-2];
      pNew->id = id;
      break;
    }
  }
  return (sqlite3_mutex*)pNew;
}

/*
** This routine deallocates a previously allocated mutex.
*/
static void debugMutexFree(sqlite3_mutex *pX){
  sqlite3_debug_mutex *p = (sqlite3_debug_mutex*)pX;
  assert( p->cnt==0 );
  if( p->id==SQLITE_MUTEX_RECURSIVE || p->id==SQLITE_MUTEX_FAST ){
    sqlite3_free(p);
  }else{
#ifdef SQLITE_ENABLE_API_ARMOR
    (void)SQLITE_MISUSE_BKPT;
#endif
  }
}
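With SQLITE_ENABLE_API_ARMOR defined, the two routines above now report an out-of-range or bogus mutex argument as misuse instead of indexing past aStatic[] or freeing a static mutex. A rough caller-side sketch against the public mutex API (editorial illustration, not part of the check-in; it assumes this debug mutex implementation is the one configured):

/* Editor's sketch, assuming SQLITE_ENABLE_API_ARMOR is defined */
static void exampleMutexUse(void){
  sqlite3_mutex *pBad = sqlite3_mutex_alloc(999);       /* invalid id: returns 0 */
  sqlite3_mutex *pMx  = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
  if( pBad==0 && pMx ){
    sqlite3_mutex_enter(pMx);
    /* ... protected work ... */
    sqlite3_mutex_leave(pMx);
    sqlite3_mutex_free(pMx);
  }
}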

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK
#endif

/*
** Each recursive mutex is an instance of the following structure.
*/
struct sqlite3_mutex {
  pthread_mutex_t mutex;     /* Mutex controlling the lock */
#if SQLITE_MUTEX_NREF
  int id;                    /* Mutex type */


  volatile int nRef;         /* Number of entrances */
  volatile pthread_t owner;  /* Thread that is within this mutex */
  int trace;                 /* True to trace changes */
#endif
};
#if SQLITE_MUTEX_NREF
#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0, 0, (pthread_t)0, 0 }







#endif

/*
** Each recursive mutex is an instance of the following structure.
*/
struct sqlite3_mutex {
  pthread_mutex_t mutex;     /* Mutex controlling the lock */
#if SQLITE_MUTEX_NREF || defined(SQLITE_ENABLE_API_ARMOR)
  int id;                    /* Mutex type */
#endif
#if SQLITE_MUTEX_NREF
  volatile int nRef;         /* Number of entrances */
  volatile pthread_t owner;  /* Thread that is within this mutex */
  int trace;                 /* True to trace changes */
#endif
};
#if SQLITE_MUTEX_NREF
#define SQLITE3_MUTEX_INITIALIZER { PTHREAD_MUTEX_INITIALIZER, 0, 0, (pthread_t)0, 0 }
        /* Use a recursive mutex if it is available */
        pthread_mutexattr_t recursiveAttr;
        pthread_mutexattr_init(&recursiveAttr);
        pthread_mutexattr_settype(&recursiveAttr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&p->mutex, &recursiveAttr);
        pthread_mutexattr_destroy(&recursiveAttr);
#endif
#if SQLITE_MUTEX_NREF
        p->id = iType;
#endif
      }
      break;
    }
    case SQLITE_MUTEX_FAST: {
      p = sqlite3MallocZero( sizeof(*p) );
      if( p ){
#if SQLITE_MUTEX_NREF
        p->id = iType;
#endif
        pthread_mutex_init(&p->mutex, 0);
      }
      break;
    }
    default: {
#ifdef SQLITE_ENABLE_API_ARMOR
      if( iType-2<0 || iType-2>=ArraySize(staticMutexes) ){
        (void)SQLITE_MISUSE_BKPT;
        return 0;
      }
#endif
      p = &staticMutexes[iType-2];
#if SQLITE_MUTEX_NREF
      p->id = iType;
#endif
      break;
    }
  }



  return p;
}


/*
** This routine deallocates a previously
** allocated mutex.  SQLite is careful to deallocate every
** mutex that it allocates.
*/
static void pthreadMutexFree(sqlite3_mutex *p){
  assert( p->nRef==0 );

  assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE );


  pthread_mutex_destroy(&p->mutex);
  sqlite3_free(p);






}

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK







        /* Use a recursive mutex if it is available */
        pthread_mutexattr_t recursiveAttr;
        pthread_mutexattr_init(&recursiveAttr);
        pthread_mutexattr_settype(&recursiveAttr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&p->mutex, &recursiveAttr);
        pthread_mutexattr_destroy(&recursiveAttr);
#endif



      }
      break;
    }
    case SQLITE_MUTEX_FAST: {
      p = sqlite3MallocZero( sizeof(*p) );
      if( p ){



        pthread_mutex_init(&p->mutex, 0);
      }
      break;
    }
    default: {
#ifdef SQLITE_ENABLE_API_ARMOR
      if( iType-2<0 || iType-2>=ArraySize(staticMutexes) ){
        (void)SQLITE_MISUSE_BKPT;
        return 0;
      }
#endif
      p = &staticMutexes[iType-2];



      break;
    }
  }
#if SQLITE_MUTEX_NREF || defined(SQLITE_ENABLE_API_ARMOR)
  if( p ) p->id = iType;
#endif
  return p;
}


/*
** This routine deallocates a previously
** allocated mutex.  SQLite is careful to deallocate every
** mutex that it allocates.
*/
static void pthreadMutexFree(sqlite3_mutex *p){
  assert( p->nRef==0 );
#if SQLITE_ENABLE_API_ARMOR
  if( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE )
#endif
  {
    pthread_mutex_destroy(&p->mutex);
    sqlite3_free(p);
  }
#ifdef SQLITE_ENABLE_API_ARMOR
  else{
    (void)SQLITE_MISUSE_BKPT;
  }
#endif
}

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK
*/
#if SQLITE_OS_WINCE
# define SQLITE_WIN32_VOLATILE
#else
# define SQLITE_WIN32_VOLATILE volatile
#endif












#endif /* _OS_WIN_H_ */

/************** End of os_win.h **********************************************/
/************** Continuing where we left off in mutex_w32.c ******************/
#endif

/*







*/
#if SQLITE_OS_WINCE
# define SQLITE_WIN32_VOLATILE
#else
# define SQLITE_WIN32_VOLATILE volatile
#endif

/*
** For some Windows sub-platforms, the _beginthreadex() / _endthreadex()
** functions are not available (e.g. those not using MSVC, Cygwin, etc).
*/
#if SQLITE_OS_WIN && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && \
    SQLITE_THREADSAFE>0 && !defined(__CYGWIN__)
# define SQLITE_OS_WIN_THREADS 1
#else
# define SQLITE_OS_WIN_THREADS 0
#endif
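(Editorial note: the new SQLITE_OS_WIN_THREADS macro defined just above folds the old compound platform test into one symbol. A sketch of how the Win32 thread code later in this same diff uses it; this snippet is an illustration, not part of os_win.h:)

/* Editor's condensation of the threads.c hunks further down in this diff.
** Old guard: #if SQLITE_OS_WIN && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_THREADSAFE>0
** New guard: #if SQLITE_OS_WIN_THREADS                                    */
#if SQLITE_OS_WIN_THREADS
# include <process.h>   /* _beginthreadex()/_endthreadex() are available here */
#else
  /* single-threaded fallback (see the "Single-Threaded" section below) */
#endif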

#endif /* _OS_WIN_H_ */

/************** End of os_win.h **********************************************/
/************** Continuing where we left off in mutex_w32.c ******************/
#endif

/*
  sqlite3_mutex *p;

  switch( iType ){
    case SQLITE_MUTEX_FAST:
    case SQLITE_MUTEX_RECURSIVE: {
      p = sqlite3MallocZero( sizeof(*p) );
      if( p ){
#ifdef SQLITE_DEBUG
        p->id = iType;

#ifdef SQLITE_WIN32_MUTEX_TRACE_DYNAMIC
        p->trace = 1;
#endif
#endif
#if SQLITE_OS_WINRT
        InitializeCriticalSectionEx(&p->mutex, 0, 0);
#else
        InitializeCriticalSection(&p->mutex);
#endif
      }
      break;
    }
    default: {
#ifdef SQLITE_ENABLE_API_ARMOR
      if( iType-2<0 || iType-2>=ArraySize(winMutex_staticMutexes) ){
        (void)SQLITE_MISUSE_BKPT;
        return 0;
      }
#endif
      assert( iType-2 >= 0 );
      assert( iType-2 < ArraySize(winMutex_staticMutexes) );
      assert( winMutex_isInit==1 );
      p = &winMutex_staticMutexes[iType-2];
#ifdef SQLITE_DEBUG
      p->id = iType;

#ifdef SQLITE_WIN32_MUTEX_TRACE_STATIC
      p->trace = 1;
#endif
#endif
      break;
    }
  }
  return p;
}


/*
** This routine deallocates a previously
** allocated mutex.  SQLite is careful to deallocate every
** mutex that it allocates.
*/
static void winMutexFree(sqlite3_mutex *p){
  assert( p );
#ifdef SQLITE_DEBUG
  assert( p->nRef==0 && p->owner==0 );
  assert( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE );
#endif
  assert( winMutex_isInit==1 );
  DeleteCriticalSection(&p->mutex);
  sqlite3_free(p);





}

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK







  sqlite3_mutex *p;

  switch( iType ){
    case SQLITE_MUTEX_FAST:
    case SQLITE_MUTEX_RECURSIVE: {
      p = sqlite3MallocZero( sizeof(*p) );
      if( p ){

        p->id = iType;
#ifdef SQLITE_DEBUG
#ifdef SQLITE_WIN32_MUTEX_TRACE_DYNAMIC
        p->trace = 1;
#endif
#endif
#if SQLITE_OS_WINRT
        InitializeCriticalSectionEx(&p->mutex, 0, 0);
#else
        InitializeCriticalSection(&p->mutex);
#endif
      }
      break;
    }
    default: {
#ifdef SQLITE_ENABLE_API_ARMOR
      if( iType-2<0 || iType-2>=ArraySize(winMutex_staticMutexes) ){
        (void)SQLITE_MISUSE_BKPT;
        return 0;
      }
#endif



      p = &winMutex_staticMutexes[iType-2];

      p->id = iType;
#ifdef SQLITE_DEBUG
#ifdef SQLITE_WIN32_MUTEX_TRACE_STATIC
      p->trace = 1;
#endif
#endif
      break;
    }
  }
  return p;
}


/*
** This routine deallocates a previously
** allocated mutex.  SQLite is careful to deallocate every
** mutex that it allocates.
*/
static void winMutexFree(sqlite3_mutex *p){
  assert( p );

  assert( p->nRef==0 && p->owner==0 );
  if( p->id==SQLITE_MUTEX_FAST || p->id==SQLITE_MUTEX_RECURSIVE ){


    DeleteCriticalSection(&p->mutex);
    sqlite3_free(p);
  }else{
#ifdef SQLITE_ENABLE_API_ARMOR
    (void)SQLITE_MISUSE_BKPT;
#endif
  }
}

/*
** The sqlite3_mutex_enter() and sqlite3_mutex_try() routines attempt
** to enter a mutex.  If another thread is already within the mutex,
** sqlite3_mutex_enter() will block and sqlite3_mutex_try() will return
** SQLITE_BUSY.  The sqlite3_mutex_try() interface returns SQLITE_OK
  double rounder;            /* Used for rounding floating point values */
  etByte flag_dp;            /* True if decimal point should be shown */
  etByte flag_rtz;           /* True if trailing zeros should be removed */
#endif
  PrintfArguments *pArgList = 0; /* Arguments for SQLITE_PRINTF_SQLFUNC */
  char buf[etBUFSIZE];       /* Conversion buffer */

#ifdef SQLITE_ENABLE_API_ARMOR
  if( ap==0 ){
    (void)SQLITE_MISUSE_BKPT;
    sqlite3StrAccumReset(pAccum);
    return;
  }
#endif
  bufpt = 0;
  if( bFlags ){
    if( (bArgList = (bFlags & SQLITE_PRINTF_SQLFUNC))!=0 ){
      pArgList = va_arg(ap, PrintfArguments*);
    }
    useIntern = bFlags & SQLITE_PRINTF_INTERNAL;
  }else{







  double rounder;            /* Used for rounding floating point values */
  etByte flag_dp;            /* True if decimal point should be shown */
  etByte flag_rtz;           /* True if trailing zeros should be removed */
#endif
  PrintfArguments *pArgList = 0; /* Arguments for SQLITE_PRINTF_SQLFUNC */
  char buf[etBUFSIZE];       /* Conversion buffer */








  bufpt = 0;
  if( bFlags ){
    if( (bArgList = (bFlags & SQLITE_PRINTF_SQLFUNC))!=0 ){
      pArgList = va_arg(ap, PrintfArguments*);
    }
    useIntern = bFlags & SQLITE_PRINTF_INTERNAL;
  }else{
*/
SQLITE_API char *sqlite3_vsnprintf(int n, char *zBuf, const char *zFormat, va_list ap){
  StrAccum acc;
  if( n<=0 ) return zBuf;
#ifdef SQLITE_ENABLE_API_ARMOR
  if( zBuf==0 || zFormat==0 ) {
    (void)SQLITE_MISUSE_BKPT;
    if( zBuf && n>0 ) zBuf[0] = 0;
    return zBuf;
  }
#endif
  sqlite3StrAccumInit(&acc, zBuf, n, 0);
  acc.useMalloc = 0;
  sqlite3VXPrintf(&acc, 0, zFormat, ap);
  return sqlite3StrAccumFinish(&acc);







*/
SQLITE_API char *sqlite3_vsnprintf(int n, char *zBuf, const char *zFormat, va_list ap){
  StrAccum acc;
  if( n<=0 ) return zBuf;
#ifdef SQLITE_ENABLE_API_ARMOR
  if( zBuf==0 || zFormat==0 ) {
    (void)SQLITE_MISUSE_BKPT;
    if( zBuf ) zBuf[0] = 0;
    return zBuf;
  }
#endif
  sqlite3StrAccumInit(&acc, zBuf, n, 0);
  acc.useMalloc = 0;
  sqlite3VXPrintf(&acc, 0, zFormat, ap);
  return sqlite3StrAccumFinish(&acc);
}

#endif /* SQLITE_OS_UNIX && defined(SQLITE_MUTEX_PTHREADS) */
/******************************** End Unix Pthreads *************************/


/********************************* Win32 Threads ****************************/
#if SQLITE_OS_WIN && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_THREADSAFE>0

#define SQLITE_THREADS_IMPLEMENTED 1  /* Prevent the single-thread code below */
#include <process.h>

/* A running thread */
struct SQLiteThread {
  void *tid;               /* The thread handle */







}

#endif /* SQLITE_OS_UNIX && defined(SQLITE_MUTEX_PTHREADS) */
/******************************** End Unix Pthreads *************************/


/********************************* Win32 Threads ****************************/
#if SQLITE_OS_WIN_THREADS

#define SQLITE_THREADS_IMPLEMENTED 1  /* Prevent the single-thread code below */
#include <process.h>

/* A running thread */
struct SQLiteThread {
  void *tid;               /* The thread handle */
    assert( bRc );
  }
  if( rc==WAIT_OBJECT_0 ) *ppOut = p->pResult;
  sqlite3_free(p);
  return (rc==WAIT_OBJECT_0) ? SQLITE_OK : SQLITE_ERROR;
}

#endif /* SQLITE_OS_WIN && !SQLITE_OS_WINCE && !SQLITE_OS_WINRT */
/******************************** End Win32 Threads *************************/


/********************************* Single-Threaded **************************/
#ifndef SQLITE_THREADS_IMPLEMENTED
/*
** This implementation does not actually create a new thread.  It does the







    assert( bRc );
  }
  if( rc==WAIT_OBJECT_0 ) *ppOut = p->pResult;
  sqlite3_free(p);
  return (rc==WAIT_OBJECT_0) ? SQLITE_OK : SQLITE_ERROR;
}

#endif /* SQLITE_OS_WIN_THREADS */
/******************************** End Win32 Threads *************************/


/********************************* Single-Threaded **************************/
#ifndef SQLITE_THREADS_IMPLEMENTED
/*
** This implementation does not actually create a new thread.  It does the
#else
# define UNIXFILE_DIRSYNC    0x00
#endif
#define UNIXFILE_PSOW        0x10     /* SQLITE_IOCAP_POWERSAFE_OVERWRITE */
#define UNIXFILE_DELETE      0x20     /* Delete on close */
#define UNIXFILE_URI         0x40     /* Filename might have query parameters */
#define UNIXFILE_NOLOCK      0x80     /* Do no file locking */
#define UNIXFILE_WARNED    0x0100     /* verifyDbFile() warnings have been issued */

/*
** Include code that is common to all os_*.c files
*/
/************** Include os_common.h in the middle of os_unix.c ***************/
/************** Begin file os_common.h ***************************************/
/*







#else
# define UNIXFILE_DIRSYNC    0x00
#endif
#define UNIXFILE_PSOW        0x10     /* SQLITE_IOCAP_POWERSAFE_OVERWRITE */
#define UNIXFILE_DELETE      0x20     /* Delete on close */
#define UNIXFILE_URI         0x40     /* Filename might have query parameters */
#define UNIXFILE_NOLOCK      0x80     /* Do no file locking */
#define UNIXFILE_WARNED    0x0100     /* verifyDbFile() warnings issued */

/*
** Include code that is common to all os_*.c files
*/
/************** Include os_common.h in the middle of os_unix.c ***************/
/************** Begin file os_common.h ***************************************/
/*
#undef osFcntl
#define osFcntl lockTrace
#endif /* SQLITE_LOCK_TRACE */

/*
** Retry ftruncate() calls that fail due to EINTR
**
** All calls to ftruncate() within this file should be made through this wrapper.
** On the Android platform, bypassing the logic below could lead to a corrupt
** database.
*/
static int robust_ftruncate(int h, sqlite3_int64 sz){
  int rc;
#ifdef __ANDROID__
  /* On Android, ftruncate() always uses 32-bit offsets, even if 
  ** _FILE_OFFSET_BITS=64 is defined. This means it is unsafe to attempt to
  ** truncate a file to any size larger than 2GiB. Silently ignore any







#undef osFcntl
#define osFcntl lockTrace
#endif /* SQLITE_LOCK_TRACE */

/*
** Retry ftruncate() calls that fail due to EINTR
**
** All calls to ftruncate() within this file should be made through
** this wrapper.  On the Android platform, bypassing the logic below
** could lead to a corrupt database.
*/
static int robust_ftruncate(int h, sqlite3_int64 sz){
  int rc;
#ifdef __ANDROID__
  /* On Android, ftruncate() always uses 32-bit offsets, even if 
  ** _FILE_OFFSET_BITS=64 is defined. This means it is unsafe to attempt to
  ** truncate a file to any size larger than 2GiB. Silently ignore any
*/
static void robust_close(unixFile *pFile, int h, int lineno){
  if( osClose(h) ){
    unixLogErrorAtLine(SQLITE_IOERR_CLOSE, "close",
                       pFile ? pFile->zPath : 0, lineno);
  }
}









/*
** Close all file descriptors accumuated in the unixInodeInfo->pUnused list.
*/ 
static void closePendingFds(unixFile *pFile){
  unixInodeInfo *pInode = pFile->pInode;
  UnixUnusedFd *p;







*/
static void robust_close(unixFile *pFile, int h, int lineno){
  if( osClose(h) ){
    unixLogErrorAtLine(SQLITE_IOERR_CLOSE, "close",
                       pFile ? pFile->zPath : 0, lineno);
  }
}

/*
** Set the pFile->lastErrno.  Do this in a subroutine as that provides
** a convenient place to set a breakpoint.
*/
static void storeLastErrno(unixFile *pFile, int error){
  pFile->lastErrno = error;
}
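The rest of this check-in converts the unix VFS to route every errno capture through this new helper, so a single breakpoint on storeLastErrno() observes them all. The mechanical change at each call site, as seen in the hunks that follow, is:

  /* before */  pFile->lastErrno = errno;
  /* after  */  storeLastErrno(pFile, errno);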

/*
** Close all file descriptors accumuated in the unixInodeInfo->pUnused list.
*/ 
static void closePendingFds(unixFile *pFile){
  unixInodeInfo *pInode = pFile->pInode;
  UnixUnusedFd *p;

  /* Get low-level information about the file that we can used to
  ** create a unique name for the file.
  */
  fd = pFile->h;
  rc = osFstat(fd, &statbuf);
  if( rc!=0 ){
    pFile->lastErrno = errno;
#ifdef EOVERFLOW
    if( pFile->lastErrno==EOVERFLOW ) return SQLITE_NOLFS;
#endif
    return SQLITE_IOERR;
  }

#ifdef __APPLE__
  /* On OS X on an msdos filesystem, the inode number is reported
  ** incorrectly for zero-size files.  See ticket #3260.  To work
  ** around this problem (we consider it a bug in OS X, not SQLite)
  ** we always increase the file size to 1 by writing a single byte
  ** prior to accessing the inode number.  The one byte written is
  ** an ASCII 'S' character which also happens to be the first byte
  ** in the header of every SQLite database.  In this way, if there
  ** is a race condition such that another thread has already populated
  ** the first page of the database, no damage is done.
  */
  if( statbuf.st_size==0 && (pFile->fsFlags & SQLITE_FSFLAGS_IS_MSDOS)!=0 ){
    do{ rc = osWrite(fd, "S", 1); }while( rc<0 && errno==EINTR );
    if( rc!=1 ){
      pFile->lastErrno = errno;
      return SQLITE_IOERR;
    }
    rc = osFstat(fd, &statbuf);
    if( rc!=0 ){
      pFile->lastErrno = errno;
      return SQLITE_IOERR;
    }
  }
#endif

  memset(&fileId, 0, sizeof(fileId));
  fileId.dev = statbuf.st_dev;








  /* Get low-level information about the file that we can used to
  ** create a unique name for the file.
  */
  fd = pFile->h;
  rc = osFstat(fd, &statbuf);
  if( rc!=0 ){
    storeLastErrno(pFile, errno);
#ifdef EOVERFLOW
    if( pFile->lastErrno==EOVERFLOW ) return SQLITE_NOLFS;
#endif
    return SQLITE_IOERR;
  }

#ifdef __APPLE__
  /* On OS X on an msdos filesystem, the inode number is reported
  ** incorrectly for zero-size files.  See ticket #3260.  To work
  ** around this problem (we consider it a bug in OS X, not SQLite)
  ** we always increase the file size to 1 by writing a single byte
  ** prior to accessing the inode number.  The one byte written is
  ** an ASCII 'S' character which also happens to be the first byte
  ** in the header of every SQLite database.  In this way, if there
  ** is a race condition such that another thread has already populated
  ** the first page of the database, no damage is done.
  */
  if( statbuf.st_size==0 && (pFile->fsFlags & SQLITE_FSFLAGS_IS_MSDOS)!=0 ){
    do{ rc = osWrite(fd, "S", 1); }while( rc<0 && errno==EINTR );
    if( rc!=1 ){
      storeLastErrno(pFile, errno);
      return SQLITE_IOERR;
    }
    rc = osFstat(fd, &statbuf);
    if( rc!=0 ){
      storeLastErrno(pFile, errno);
      return SQLITE_IOERR;
    }
  }
#endif

  memset(&fileId, 0, sizeof(fileId));
  fileId.dev = statbuf.st_dev;
    struct flock lock;
    lock.l_whence = SEEK_SET;
    lock.l_start = RESERVED_BYTE;
    lock.l_len = 1;
    lock.l_type = F_WRLCK;
    if( osFcntl(pFile->h, F_GETLK, &lock) ){
      rc = SQLITE_IOERR_CHECKRESERVEDLOCK;
      pFile->lastErrno = errno;
    } else if( lock.l_type!=F_UNLCK ){
      reserved = 1;
    }
  }
#endif
  
  unixLeaveMutex();







    struct flock lock;
    lock.l_whence = SEEK_SET;
    lock.l_start = RESERVED_BYTE;
    lock.l_len = 1;
    lock.l_type = F_WRLCK;
    if( osFcntl(pFile->h, F_GETLK, &lock) ){
      rc = SQLITE_IOERR_CHECKRESERVEDLOCK;
      storeLastErrno(pFile, errno);
    } else if( lock.l_type!=F_UNLCK ){
      reserved = 1;
    }
  }
#endif
  
  unixLeaveMutex();
  ){
    lock.l_type = (eFileLock==SHARED_LOCK?F_RDLCK:F_WRLCK);
    lock.l_start = PENDING_BYTE;
    if( unixFileLock(pFile, &lock) ){
      tErrno = errno;
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( rc!=SQLITE_BUSY ){
        pFile->lastErrno = tErrno;
      }
      goto end_lock;
    }
  }


  /* If control gets to this point, then actually go ahead and make







  ){
    lock.l_type = (eFileLock==SHARED_LOCK?F_RDLCK:F_WRLCK);
    lock.l_start = PENDING_BYTE;
    if( unixFileLock(pFile, &lock) ){
      tErrno = errno;
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( rc!=SQLITE_BUSY ){
        storeLastErrno(pFile, tErrno);
      }
      goto end_lock;
    }
  }


  /* If control gets to this point, then actually go ahead and make
      /* This could happen with a network mount */
      tErrno = errno;
      rc = SQLITE_IOERR_UNLOCK; 
    }

    if( rc ){
      if( rc!=SQLITE_BUSY ){
        pFile->lastErrno = tErrno;
      }
      goto end_lock;
    }else{
      pFile->eFileLock = SHARED_LOCK;
      pInode->nLock++;
      pInode->nShared = 1;
    }







      /* This could happen with a network mount */
      tErrno = errno;
      rc = SQLITE_IOERR_UNLOCK; 
    }

    if( rc ){
      if( rc!=SQLITE_BUSY ){
        storeLastErrno(pFile, tErrno);
      }
      goto end_lock;
    }else{
      pFile->eFileLock = SHARED_LOCK;
      pInode->nLock++;
      pInode->nShared = 1;
    }
      lock.l_len = SHARED_SIZE;
    }

    if( unixFileLock(pFile, &lock) ){
      tErrno = errno;
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( rc!=SQLITE_BUSY ){
        pFile->lastErrno = tErrno;
      }
    }
  }
  

#ifdef SQLITE_DEBUG
  /* Set up the transaction-counter change checking flags when







      lock.l_len = SHARED_SIZE;
    }

    if( unixFileLock(pFile, &lock) ){
      tErrno = errno;
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( rc!=SQLITE_BUSY ){
        storeLastErrno(pFile, tErrno);
      }
    }
  }
  

#ifdef SQLITE_DEBUG
  /* Set up the transaction-counter change checking flags when
    ** write lock until the rest is covered by a read lock:
    **  1:   [WWWWW]
    **  2:   [....W]
    **  3:   [RRRRW]
    **  4:   [RRRR.]
    */
    if( eFileLock==SHARED_LOCK ){

#if !defined(__APPLE__) || !SQLITE_ENABLE_LOCKING_STYLE
      (void)handleNFSUnlock;
      assert( handleNFSUnlock==0 );
#endif
#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE
      if( handleNFSUnlock ){
        int tErrno;               /* Error code from system call errors */
        off_t divSize = SHARED_SIZE - 1;
        
        lock.l_type = F_UNLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = SQLITE_IOERR_UNLOCK;
          if( IS_LOCK_ERROR(rc) ){
            pFile->lastErrno = tErrno;
          }
          goto end_unlock;
        }
        lock.l_type = F_RDLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_RDLOCK);
          if( IS_LOCK_ERROR(rc) ){
            pFile->lastErrno = tErrno;
          }
          goto end_unlock;
        }
        lock.l_type = F_UNLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST+divSize;
        lock.l_len = SHARED_SIZE-divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = SQLITE_IOERR_UNLOCK;
          if( IS_LOCK_ERROR(rc) ){
            pFile->lastErrno = tErrno;
          }
          goto end_unlock;
        }
      }else
#endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */
      {
        lock.l_type = F_RDLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = SHARED_SIZE;
        if( unixFileLock(pFile, &lock) ){
          /* In theory, the call to unixFileLock() cannot fail because another
          ** process is holding an incompatible lock. If it does, this 
          ** indicates that the other process is not following the locking
          ** protocol. If this happens, return SQLITE_IOERR_RDLOCK. Returning
          ** SQLITE_BUSY would confuse the upper layer (in practice it causes 
          ** an assert to fail). */ 
          rc = SQLITE_IOERR_RDLOCK;
          pFile->lastErrno = errno;
          goto end_unlock;
        }
      }
    }
    lock.l_type = F_UNLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = PENDING_BYTE;
    lock.l_len = 2L;  assert( PENDING_BYTE+1==RESERVED_BYTE );
    if( unixFileLock(pFile, &lock)==0 ){
      pInode->eFileLock = SHARED_LOCK;
    }else{
      rc = SQLITE_IOERR_UNLOCK;
      pFile->lastErrno = errno;
      goto end_unlock;
    }
  }
  if( eFileLock==NO_LOCK ){
    /* Decrement the shared lock counter.  Release the lock using an
    ** OS call only when all threads in this same process have released
    ** the lock.
    */
    pInode->nShared--;
    if( pInode->nShared==0 ){
      lock.l_type = F_UNLCK;
      lock.l_whence = SEEK_SET;
      lock.l_start = lock.l_len = 0L;
      if( unixFileLock(pFile, &lock)==0 ){
        pInode->eFileLock = NO_LOCK;
      }else{
        rc = SQLITE_IOERR_UNLOCK;
        pFile->lastErrno = errno;
        pInode->eFileLock = NO_LOCK;
        pFile->eFileLock = NO_LOCK;
      }
    }

    /* Decrement the count of locks against this same file.  When the
    ** count reaches zero, close any other file descriptors whose close







    ** write lock until the rest is covered by a read lock:
    **  1:   [WWWWW]
    **  2:   [....W]
    **  3:   [RRRRW]
    **  4:   [RRRR.]
    */
    if( eFileLock==SHARED_LOCK ){

#if !defined(__APPLE__) || !SQLITE_ENABLE_LOCKING_STYLE
      (void)handleNFSUnlock;
      assert( handleNFSUnlock==0 );
#endif
#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE
      if( handleNFSUnlock ){
        int tErrno;               /* Error code from system call errors */
        off_t divSize = SHARED_SIZE - 1;
        
        lock.l_type = F_UNLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = SQLITE_IOERR_UNLOCK;
          if( IS_LOCK_ERROR(rc) ){
            storeLastErrno(pFile, tErrno);
          }
          goto end_unlock;
        }
        lock.l_type = F_RDLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_RDLOCK);
          if( IS_LOCK_ERROR(rc) ){
            storeLastErrno(pFile, tErrno);
          }
          goto end_unlock;
        }
        lock.l_type = F_UNLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST+divSize;
        lock.l_len = SHARED_SIZE-divSize;
        if( unixFileLock(pFile, &lock)==(-1) ){
          tErrno = errno;
          rc = SQLITE_IOERR_UNLOCK;
          if( IS_LOCK_ERROR(rc) ){
            storeLastErrno(pFile, tErrno);
          }
          goto end_unlock;
        }
      }else
#endif /* defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE */
      {
        lock.l_type = F_RDLCK;
        lock.l_whence = SEEK_SET;
        lock.l_start = SHARED_FIRST;
        lock.l_len = SHARED_SIZE;
        if( unixFileLock(pFile, &lock) ){
          /* In theory, the call to unixFileLock() cannot fail because another
          ** process is holding an incompatible lock. If it does, this 
          ** indicates that the other process is not following the locking
          ** protocol. If this happens, return SQLITE_IOERR_RDLOCK. Returning
          ** SQLITE_BUSY would confuse the upper layer (in practice it causes 
          ** an assert to fail). */ 
          rc = SQLITE_IOERR_RDLOCK;
          storeLastErrno(pFile, errno);
          goto end_unlock;
        }
      }
    }
    lock.l_type = F_UNLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = PENDING_BYTE;
    lock.l_len = 2L;  assert( PENDING_BYTE+1==RESERVED_BYTE );
    if( unixFileLock(pFile, &lock)==0 ){
      pInode->eFileLock = SHARED_LOCK;
    }else{
      rc = SQLITE_IOERR_UNLOCK;
      storeLastErrno(pFile, errno);
      goto end_unlock;
    }
  }
  if( eFileLock==NO_LOCK ){
    /* Decrement the shared lock counter.  Release the lock using an
    ** OS call only when all threads in this same process have released
    ** the lock.
    */
    pInode->nShared--;
    if( pInode->nShared==0 ){
      lock.l_type = F_UNLCK;
      lock.l_whence = SEEK_SET;
      lock.l_start = lock.l_len = 0L;
      if( unixFileLock(pFile, &lock)==0 ){
        pInode->eFileLock = NO_LOCK;
      }else{
        rc = SQLITE_IOERR_UNLOCK;
        storeLastErrno(pFile, errno);
        pInode->eFileLock = NO_LOCK;
        pFile->eFileLock = NO_LOCK;
      }
    }

    /* Decrement the count of locks against this same file.  When the
    ** count reaches zero, close any other file descriptors whose close
    /* failed to open/create the lock directory */
    int tErrno = errno;
    if( EEXIST == tErrno ){
      rc = SQLITE_BUSY;
    } else {
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( IS_LOCK_ERROR(rc) ){
        pFile->lastErrno = tErrno;
      }
    }
    return rc;
  } 
  
  /* got it, set the type and return ok */
  pFile->eFileLock = eFileLock;







    /* failed to open/create the lock directory */
    int tErrno = errno;
    if( EEXIST == tErrno ){
      rc = SQLITE_BUSY;
    } else {
      rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
      if( IS_LOCK_ERROR(rc) ){
        storeLastErrno(pFile, tErrno);
      }
    }
    return rc;
  } 
  
  /* got it, set the type and return ok */
  pFile->eFileLock = eFileLock;
  if( rc<0 ){
    int tErrno = errno;
    rc = 0;
    if( ENOENT != tErrno ){
      rc = SQLITE_IOERR_UNLOCK;
    }
    if( IS_LOCK_ERROR(rc) ){
      pFile->lastErrno = tErrno;
    }
    return rc; 
  }
  pFile->eFileLock = NO_LOCK;
  return SQLITE_OK;
}








  if( rc<0 ){
    int tErrno = errno;
    rc = 0;
    if( ENOENT != tErrno ){
      rc = SQLITE_IOERR_UNLOCK;
    }
    if( IS_LOCK_ERROR(rc) ){
      storeLastErrno(pFile, tErrno);
    }
    return rc; 
  }
  pFile->eFileLock = NO_LOCK;
  return SQLITE_OK;
}

      /* got the lock, unlock it */
      lrc = robust_flock(pFile->h, LOCK_UN);
      if ( lrc ) {
        int tErrno = errno;
        /* unlock failed with an error */
        lrc = SQLITE_IOERR_UNLOCK; 
        if( IS_LOCK_ERROR(lrc) ){
          pFile->lastErrno = tErrno;
          rc = lrc;
        }
      }
    } else {
      int tErrno = errno;
      reserved = 1;
      /* someone else might have it reserved */
      lrc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); 
      if( IS_LOCK_ERROR(lrc) ){
        pFile->lastErrno = tErrno;
        rc = lrc;
      }
    }
  }
  OSTRACE(("TEST WR-LOCK %d %d %d (flock)\n", pFile->h, rc, reserved));

#ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS







      /* got the lock, unlock it */
      lrc = robust_flock(pFile->h, LOCK_UN);
      if ( lrc ) {
        int tErrno = errno;
        /* unlock failed with an error */
        lrc = SQLITE_IOERR_UNLOCK; 
        if( IS_LOCK_ERROR(lrc) ){
          storeLastErrno(pFile, tErrno);
          rc = lrc;
        }
      }
    } else {
      int tErrno = errno;
      reserved = 1;
      /* someone else might have it reserved */
      lrc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK); 
      if( IS_LOCK_ERROR(lrc) ){
        storeLastErrno(pFile, tErrno);
        rc = lrc;
      }
    }
  }
  OSTRACE(("TEST WR-LOCK %d %d %d (flock)\n", pFile->h, rc, reserved));

#ifdef SQLITE_IGNORE_FLOCK_LOCK_ERRORS
  /* grab an exclusive lock */
  
  if (robust_flock(pFile->h, LOCK_EX | LOCK_NB)) {
    int tErrno = errno;
    /* didn't get, must be busy */
    rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
    if( IS_LOCK_ERROR(rc) ){
      pFile->lastErrno = tErrno;
    }
  } else {
    /* got it, set the type and return ok */
    pFile->eFileLock = eFileLock;
  }
  OSTRACE(("LOCK    %d %s %s (flock)\n", pFile->h, azFileLock(eFileLock), 
           rc==SQLITE_OK ? "ok" : "failed"));







  /* grab an exclusive lock */
  
  if (robust_flock(pFile->h, LOCK_EX | LOCK_NB)) {
    int tErrno = errno;
    /* didn't get, must be busy */
    rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_LOCK);
    if( IS_LOCK_ERROR(rc) ){
      storeLastErrno(pFile, tErrno);
    }
  } else {
    /* got it, set the type and return ok */
    pFile->eFileLock = eFileLock;
  }
  OSTRACE(("LOCK    %d %s %s (flock)\n", pFile->h, azFileLock(eFileLock), 
           rc==SQLITE_OK ? "ok" : "failed"));
  if( !reserved ){
    sem_t *pSem = pFile->pInode->pSem;

    if( sem_trywait(pSem)==-1 ){
      int tErrno = errno;
      if( EAGAIN != tErrno ){
        rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_CHECKRESERVEDLOCK);
        pFile->lastErrno = tErrno;
      } else {
        /* someone else has the lock when we are in NO_LOCK */
        reserved = (pFile->eFileLock < SHARED_LOCK);
      }
    }else{
      /* we could have it if we want it */
      sem_post(pSem);







  if( !reserved ){
    sem_t *pSem = pFile->pInode->pSem;

    if( sem_trywait(pSem)==-1 ){
      int tErrno = errno;
      if( EAGAIN != tErrno ){
        rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_CHECKRESERVEDLOCK);
        storeLastErrno(pFile, tErrno);
      } else {
        /* someone else has the lock when we are in NO_LOCK */
        reserved = (pFile->eFileLock < SHARED_LOCK);
      }
    }else{
      /* we could have it if we want it */
      sem_post(pSem);
  }
  
  /* no, really unlock. */
  if ( sem_post(pSem)==-1 ) {
    int rc, tErrno = errno;
    rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK);
    if( IS_LOCK_ERROR(rc) ){
      pFile->lastErrno = tErrno;
    }
    return rc; 
  }
  pFile->eFileLock = NO_LOCK;
  return SQLITE_OK;
}








  }
  
  /* no, really unlock. */
  if ( sem_post(pSem)==-1 ) {
    int rc, tErrno = errno;
    rc = sqliteErrorFromPosixError(tErrno, SQLITE_IOERR_UNLOCK);
    if( IS_LOCK_ERROR(rc) ){
      storeLastErrno(pFile, tErrno);
    }
    return rc; 
  }
  pFile->eFileLock = NO_LOCK;
  return SQLITE_OK;
}

#ifdef SQLITE_IGNORE_AFP_LOCK_ERRORS
    rc = SQLITE_BUSY;
#else
    rc = sqliteErrorFromPosixError(tErrno,
                    setLockFlag ? SQLITE_IOERR_LOCK : SQLITE_IOERR_UNLOCK);
#endif /* SQLITE_IGNORE_AFP_LOCK_ERRORS */
    if( IS_LOCK_ERROR(rc) ){
      pFile->lastErrno = tErrno;
    }
    return rc;
  } else {
    return SQLITE_OK;
  }
}








#ifdef SQLITE_IGNORE_AFP_LOCK_ERRORS
    rc = SQLITE_BUSY;
#else
    rc = sqliteErrorFromPosixError(tErrno,
                    setLockFlag ? SQLITE_IOERR_LOCK : SQLITE_IOERR_UNLOCK);
#endif /* SQLITE_IGNORE_AFP_LOCK_ERRORS */
    if( IS_LOCK_ERROR(rc) ){
      storeLastErrno(pFile, tErrno);
    }
    return rc;
  } else {
    return SQLITE_OK;
  }
}

    if( IS_LOCK_ERROR(lrc1) ){
      lrc1Errno = pFile->lastErrno;
    }
    /* Drop the temporary PENDING lock */
    lrc2 = afpSetLock(context->dbPath, pFile, PENDING_BYTE, 1, 0);
    
    if( IS_LOCK_ERROR(lrc1) ) {
      pFile->lastErrno = lrc1Errno;
      rc = lrc1;
      goto afp_end_lock;
    } else if( IS_LOCK_ERROR(lrc2) ){
      rc = lrc2;
      goto afp_end_lock;
    } else if( lrc1 != SQLITE_OK ) {
      rc = lrc1;







    if( IS_LOCK_ERROR(lrc1) ){
      lrc1Errno = pFile->lastErrno;
    }
    /* Drop the temporary PENDING lock */
    lrc2 = afpSetLock(context->dbPath, pFile, PENDING_BYTE, 1, 0);
    
    if( IS_LOCK_ERROR(lrc1) ) {
      storeLastErrno(pFile, lrc1Errno);
      rc = lrc1;
      goto afp_end_lock;
    } else if( IS_LOCK_ERROR(lrc2) ){
      rc = lrc2;
      goto afp_end_lock;
    } else if( lrc1 != SQLITE_OK ) {
      rc = lrc1;
    got = osPread64(id->h, pBuf, cnt, offset);
    SimulateIOError( got = -1 );
#else
    newOffset = lseek(id->h, offset, SEEK_SET);
    SimulateIOError( newOffset-- );
    if( newOffset!=offset ){
      if( newOffset == -1 ){
        ((unixFile*)id)->lastErrno = errno;
      }else{
        ((unixFile*)id)->lastErrno = 0;
      }
      return -1;
    }
    got = osRead(id->h, pBuf, cnt);
#endif
    if( got==cnt ) break;
    if( got<0 ){
      if( errno==EINTR ){ got = 1; continue; }
      prior = 0;
      ((unixFile*)id)->lastErrno = errno;
      break;
    }else if( got>0 ){
      cnt -= got;
      offset += got;
      prior += got;
      pBuf = (void*)(got + (char*)pBuf);
    }







    got = osPread64(id->h, pBuf, cnt, offset);
    SimulateIOError( got = -1 );
#else
    newOffset = lseek(id->h, offset, SEEK_SET);
    SimulateIOError( newOffset-- );
    if( newOffset!=offset ){
      if( newOffset == -1 ){
        storeLastErrno((unixFile*)id, errno);
      }else{
        storeLastErrno((unixFile*)id, 0);
      }
      return -1;
    }
    got = osRead(id->h, pBuf, cnt);
#endif
    if( got==cnt ) break;
    if( got<0 ){
      if( errno==EINTR ){ got = 1; continue; }
      prior = 0;
      storeLastErrno((unixFile*)id,  errno);
      break;
    }else if( got>0 ){
      cnt -= got;
      offset += got;
      prior += got;
      pBuf = (void*)(got + (char*)pBuf);
    }
Before (at line 28377):
  got = seekAndRead(pFile, offset, pBuf, amt);
  if( got==amt ){
    return SQLITE_OK;
  }else if( got<0 ){
    /* lastErrno set by seekAndRead */
    return SQLITE_IOERR_READ;
  }else{
    pFile->lastErrno = 0; /* not a system error */
    /* Unread parts of the buffer must be zero-filled */
    memset(&((char*)pBuf)[got], 0, amt-got);
    return SQLITE_IOERR_SHORT_READ;
  }
}

/*
After (at line 28431):
  got = seekAndRead(pFile, offset, pBuf, amt);
  if( got==amt ){
    return SQLITE_OK;
  }else if( got<0 ){
    /* lastErrno set by seekAndRead */
    return SQLITE_IOERR_READ;
  }else{
    storeLastErrno(pFile, 0);   /* not a system error */
    /* Unread parts of the buffer must be zero-filled */
    memset(&((char*)pBuf)[got], 0, amt-got);
    return SQLITE_IOERR_SHORT_READ;
  }
}

/*
Before (at line 28406):

  assert( nBuf==(nBuf&0x1ffff) );
  assert( fd>2 );
  nBuf &= 0x1ffff;
  TIMER_START;

#if defined(USE_PREAD)
  do{ rc = osPwrite(fd, pBuf, nBuf, iOff); }while( rc<0 && errno==EINTR );
#elif defined(USE_PREAD64)
  do{ rc = osPwrite64(fd, pBuf, nBuf, iOff);}while( rc<0 && errno==EINTR);
#else
  do{
    i64 iSeek = lseek(fd, iOff, SEEK_SET);
    SimulateIOError( iSeek-- );

    if( iSeek!=iOff ){
      if( piErrno ) *piErrno = (iSeek==-1 ? errno : 0);
After (at line 28460):

  assert( nBuf==(nBuf&0x1ffff) );
  assert( fd>2 );
  nBuf &= 0x1ffff;
  TIMER_START;

#if defined(USE_PREAD)
  do{ rc = (int)osPwrite(fd, pBuf, nBuf, iOff); }while( rc<0 && errno==EINTR );
#elif defined(USE_PREAD64)
  do{ rc = (int)osPwrite64(fd, pBuf, nBuf, iOff);}while( rc<0 && errno==EINTR);
#else
  do{
    i64 iSeek = lseek(fd, iOff, SEEK_SET);
    SimulateIOError( iSeek-- );

    if( iSeek!=iOff ){
      if( piErrno ) *piErrno = (iSeek==-1 ? errno : 0);
Before (at line 28518):
  SimulateDiskfullError(( wrote=0, amt=1 ));

  if( amt>0 ){
    if( wrote<0 && pFile->lastErrno!=ENOSPC ){
      /* lastErrno set by seekAndWrite */
      return SQLITE_IOERR_WRITE;
    }else{
      pFile->lastErrno = 0; /* not a system error */
      return SQLITE_FULL;
    }
  }

  return SQLITE_OK;
}

After (at line 28572):
  SimulateDiskfullError(( wrote=0, amt=1 ));

  if( amt>0 ){
    if( wrote<0 && pFile->lastErrno!=ENOSPC ){
      /* lastErrno set by seekAndWrite */
      return SQLITE_IOERR_WRITE;
    }else{
      storeLastErrno(pFile, 0); /* not a system error */
      return SQLITE_FULL;
    }
  }

  return SQLITE_OK;
}

Before (at line 28727):
  SimulateDiskfullError( return SQLITE_FULL );

  assert( pFile );
  OSTRACE(("SYNC    %-3d\n", pFile->h));
  rc = full_fsync(pFile->h, isFullsync, isDataOnly);
  SimulateIOError( rc=1 );
  if( rc ){
    pFile->lastErrno = errno;
    return unixLogError(SQLITE_IOERR_FSYNC, "full_fsync", pFile->zPath);
  }

  /* Also fsync the directory containing the file if the DIRSYNC flag
  ** is set.  This is a one-time occurrence.  Many systems (examples: AIX)
  ** are unable to fsync a directory, so ignore errors on the fsync.
  */
After (at line 28781):
  SimulateDiskfullError( return SQLITE_FULL );

  assert( pFile );
  OSTRACE(("SYNC    %-3d\n", pFile->h));
  rc = full_fsync(pFile->h, isFullsync, isDataOnly);
  SimulateIOError( rc=1 );
  if( rc ){
    storeLastErrno(pFile, errno);
    return unixLogError(SQLITE_IOERR_FSYNC, "full_fsync", pFile->zPath);
  }

  /* Also fsync the directory containing the file if the DIRSYNC flag
  ** is set.  This is a one-time occurrence.  Many systems (examples: AIX)
  ** are unable to fsync a directory, so ignore errors on the fsync.
  */
Before (at line 28771):
  */
  if( pFile->szChunk>0 ){
    nByte = ((nByte + pFile->szChunk - 1)/pFile->szChunk) * pFile->szChunk;
  }

  rc = robust_ftruncate(pFile->h, nByte);
  if( rc ){
    pFile->lastErrno = errno;
    return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath);
  }else{
#ifdef SQLITE_DEBUG
    /* If we are doing a normal write to a database file (as opposed to
    ** doing a hot-journal rollback or a write to some file other than a
    ** normal database file) and we truncate the file to zero length,
    ** that effectively updates the change counter.  This might happen
After (at line 28825):
  */
  if( pFile->szChunk>0 ){
    nByte = ((nByte + pFile->szChunk - 1)/pFile->szChunk) * pFile->szChunk;
  }

  rc = robust_ftruncate(pFile->h, nByte);
  if( rc ){
    storeLastErrno(pFile, errno);
    return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath);
  }else{
#ifdef SQLITE_DEBUG
    /* If we are doing a normal write to a database file (as opposed to
    ** doing a hot-journal rollback or a write to some file other than a
    ** normal database file) and we truncate the file to zero length,
    ** that effectively updates the change counter.  This might happen
Before (at line 28811):
static int unixFileSize(sqlite3_file *id, i64 *pSize){
  int rc;
  struct stat buf;
  assert( id );
  rc = osFstat(((unixFile*)id)->h, &buf);
  SimulateIOError( rc=1 );
  if( rc!=0 ){
    ((unixFile*)id)->lastErrno = errno;
    return SQLITE_IOERR_FSTAT;
  }
  *pSize = buf.st_size;

  /* When opening a zero-size database, the findInodeInfo() procedure
  ** writes a single byte into that file in order to work around a bug
  ** in the OS-X msdos filesystem.  In order to avoid problems with upper
After (at line 28865):
static int unixFileSize(sqlite3_file *id, i64 *pSize){
  int rc;
  struct stat buf;
  assert( id );
  rc = osFstat(((unixFile*)id)->h, &buf);
  SimulateIOError( rc=1 );
  if( rc!=0 ){
    storeLastErrno((unixFile*)id, errno);
    return SQLITE_IOERR_FSTAT;
  }
  *pSize = buf.st_size;

  /* When opening a zero-size database, the findInodeInfo() procedure
  ** writes a single byte into that file in order to work around a bug
  ** in the OS-X msdos filesystem.  In order to avoid problems with upper
Before (at line 28847):
** nBytes or larger, this routine is a no-op.
*/
static int fcntlSizeHint(unixFile *pFile, i64 nByte){
  if( pFile->szChunk>0 ){
    i64 nSize;                    /* Required file size */
    struct stat buf;              /* Used to hold return values of fstat() */
   
    if( osFstat(pFile->h, &buf) ) return SQLITE_IOERR_FSTAT;



    nSize = ((nByte+pFile->szChunk-1) / pFile->szChunk) * pFile->szChunk;
    if( nSize>(i64)buf.st_size ){

#if defined(HAVE_POSIX_FALLOCATE) && HAVE_POSIX_FALLOCATE
      /* The code below is handling the return value of osFallocate() 
      ** correctly. posix_fallocate() is defined to "returns zero on success, 
After (at line 28901):
** nBytes or larger, this routine is a no-op.
*/
static int fcntlSizeHint(unixFile *pFile, i64 nByte){
  if( pFile->szChunk>0 ){
    i64 nSize;                    /* Required file size */
    struct stat buf;              /* Used to hold return values of fstat() */
   
    if( osFstat(pFile->h, &buf) ){
      return SQLITE_IOERR_FSTAT;
    }

    nSize = ((nByte+pFile->szChunk-1) / pFile->szChunk) * pFile->szChunk;
    if( nSize>(i64)buf.st_size ){

#if defined(HAVE_POSIX_FALLOCATE) && HAVE_POSIX_FALLOCATE
      /* The code below is handling the return value of osFallocate() 
      ** correctly. posix_fallocate() is defined to "returns zero on success, 
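The szChunk rounding used in unixTruncate() and fcntlSizeHint() above pads a requested size up to the next multiple of the configured chunk size. A minimal standalone sketch of that arithmetic, with hypothetical values:

  /* Round nByte up to the next multiple of szChunk (values hypothetical). */
  static long long roundUpToChunk(long long nByte, int szChunk){
    return ((nByte + szChunk - 1) / szChunk) * szChunk;
  }
  /* e.g. roundUpToChunk(10000, 8192)==16384, roundUpToChunk(16384, 8192)==16384 */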
Before (at line 28894):
  }

#if SQLITE_MAX_MMAP_SIZE>0
  if( pFile->mmapSizeMax>0 && nByte>pFile->mmapSize ){
    int rc;
    if( pFile->szChunk<=0 ){
      if( robust_ftruncate(pFile->h, nByte) ){
        pFile->lastErrno = errno;
        return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath);
      }
    }

    rc = unixMapfile(pFile, nByte);
    return rc;
  }
After (at line 28950):
  }

#if SQLITE_MAX_MMAP_SIZE>0
  if( pFile->mmapSizeMax>0 && nByte>pFile->mmapSize ){
    int rc;
    if( pFile->szChunk<=0 ){
      if( robust_ftruncate(pFile->h, nByte) ){
        storeLastErrno(pFile, errno);
        return unixLogError(SQLITE_IOERR_TRUNCATE, "ftruncate", pFile->zPath);
      }
    }

    rc = unixMapfile(pFile, nByte);
    return rc;
  }
Before (at line 28936):
static int unixFileControl(sqlite3_file *id, int op, void *pArg){
  unixFile *pFile = (unixFile*)id;
  switch( op ){
    case SQLITE_FCNTL_LOCKSTATE: {
      *(int*)pArg = pFile->eFileLock;
      return SQLITE_OK;
    }
    case SQLITE_LAST_ERRNO: {
      *(int*)pArg = pFile->lastErrno;
      return SQLITE_OK;
    }
    case SQLITE_FCNTL_CHUNK_SIZE: {
      pFile->szChunk = *(int *)pArg;
      return SQLITE_OK;
    }
After (at line 28992):
static int unixFileControl(sqlite3_file *id, int op, void *pArg){
  unixFile *pFile = (unixFile*)id;
  switch( op ){
    case SQLITE_FCNTL_LOCKSTATE: {
      *(int*)pArg = pFile->eFileLock;
      return SQLITE_OK;
    }
    case SQLITE_FCNTL_LAST_ERRNO: {
      *(int*)pArg = pFile->lastErrno;
      return SQLITE_OK;
    }
    case SQLITE_FCNTL_CHUNK_SIZE: {
      pFile->szChunk = *(int *)pArg;
      return SQLITE_OK;
    }
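The SQLITE_FCNTL_LAST_ERRNO opcode handled above (renamed from SQLITE_LAST_ERRNO) exposes the per-file lastErrno value through sqlite3_file_control(). A hedged usage sketch; the open handle and the "main" schema name are assumptions:

  #include <sqlite3.h>
  #include <stdio.h>

  /* Ask the unix VFS for the last OS errno recorded on the "main" database. */
  static void showLastErrno(sqlite3 *db){
    int iErrno = 0;
    if( sqlite3_file_control(db, "main", SQLITE_FCNTL_LAST_ERRNO, &iErrno)==SQLITE_OK ){
      printf("last OS error code: %d\n", iErrno);
    }
  }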
Before (at line 29005):
    */
    case SQLITE_FCNTL_DB_UNCHANGED: {
      ((unixFile*)id)->dbUpdate = 0;
      return SQLITE_OK;
    }
#endif
#if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__)
    case SQLITE_SET_LOCKPROXYFILE:
    case SQLITE_GET_LOCKPROXYFILE: {
      return proxyFileControl(id,op,pArg);
    }
#endif /* SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) */
  }
  return SQLITE_NOTFOUND;
}

After (at line 29061):
    */
    case SQLITE_FCNTL_DB_UNCHANGED: {
      ((unixFile*)id)->dbUpdate = 0;
      return SQLITE_OK;
    }
#endif
#if SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__)
    case SQLITE_FCNTL_SET_LOCKPROXYFILE:
    case SQLITE_FCNTL_GET_LOCKPROXYFILE: {
      return proxyFileControl(id,op,pArg);
    }
#endif /* SQLITE_ENABLE_LOCKING_STYLE && defined(__APPLE__) */
  }
  return SQLITE_NOTFOUND;
}

Before (at line 29411):
  ** one if present. Create a new one if necessary.
  */
  unixEnterMutex();
  pInode = pDbFd->pInode;
  pShmNode = pInode->pShmNode;
  if( pShmNode==0 ){
    struct stat sStat;                 /* fstat() info for database file */




    /* Call fstat() to figure out the permissions on the database file. If
    ** a new *-shm file is created, an attempt will be made to create it
    ** with the same permissions.
    */
    if( osFstat(pDbFd->h, &sStat) && pInode->bProcessLock==0 ){
      rc = SQLITE_IOERR_FSTAT;
      goto shm_open_err;
    }

#ifdef SQLITE_SHM_DIRECTORY
    nShmFilename = sizeof(SQLITE_SHM_DIRECTORY) + 31;
#else
    nShmFilename = 6 + (int)strlen(pDbFd->zPath);
#endif
    pShmNode = sqlite3_malloc( sizeof(*pShmNode) + nShmFilename );
    if( pShmNode==0 ){
      rc = SQLITE_NOMEM;
      goto shm_open_err;
    }
    memset(pShmNode, 0, sizeof(*pShmNode)+nShmFilename);
    zShmFilename = pShmNode->zFilename = (char*)&pShmNode[1];
#ifdef SQLITE_SHM_DIRECTORY
    sqlite3_snprintf(nShmFilename, zShmFilename, 
                     SQLITE_SHM_DIRECTORY "/sqlite-shm-%x-%x",
                     (u32)sStat.st_ino, (u32)sStat.st_dev);
#else
    sqlite3_snprintf(nShmFilename, zShmFilename, "%s-shm", pDbFd->zPath);
    sqlite3FileSuffix3(pDbFd->zPath, zShmFilename);
#endif
    pShmNode->h = -1;
    pDbFd->pInode->pShmNode = pShmNode;
    pShmNode->pInode = pDbFd->pInode;
    pShmNode->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
    if( pShmNode->mutex==0 ){
After (at line 29467):
  ** one if present. Create a new one if necessary.
  */
  unixEnterMutex();
  pInode = pDbFd->pInode;
  pShmNode = pInode->pShmNode;
  if( pShmNode==0 ){
    struct stat sStat;                 /* fstat() info for database file */
#ifndef SQLITE_SHM_DIRECTORY
    const char *zBasePath = pDbFd->zPath;
#endif

    /* Call fstat() to figure out the permissions on the database file. If
    ** a new *-shm file is created, an attempt will be made to create it
    ** with the same permissions.
    */
    if( osFstat(pDbFd->h, &sStat) && pInode->bProcessLock==0 ){
      rc = SQLITE_IOERR_FSTAT;
      goto shm_open_err;
    }

#ifdef SQLITE_SHM_DIRECTORY
    nShmFilename = sizeof(SQLITE_SHM_DIRECTORY) + 31;
#else
    nShmFilename = 6 + (int)strlen(zBasePath);
#endif
    pShmNode = sqlite3_malloc( sizeof(*pShmNode) + nShmFilename );
    if( pShmNode==0 ){
      rc = SQLITE_NOMEM;
      goto shm_open_err;
    }
    memset(pShmNode, 0, sizeof(*pShmNode)+nShmFilename);
    zShmFilename = pShmNode->zFilename = (char*)&pShmNode[1];
#ifdef SQLITE_SHM_DIRECTORY
    sqlite3_snprintf(nShmFilename, zShmFilename, 
                     SQLITE_SHM_DIRECTORY "/sqlite-shm-%x-%x",
                     (u32)sStat.st_ino, (u32)sStat.st_dev);
#else
    sqlite3_snprintf(nShmFilename, zShmFilename, "%s-shm", zBasePath);
    sqlite3FileSuffix3(pDbFd->zPath, zShmFilename);
#endif
    pShmNode->h = -1;
    pDbFd->pInode->pShmNode = pShmNode;
    pShmNode->pInode = pDbFd->pInode;
    pShmNode->mutex = sqlite3_mutex_alloc(SQLITE_MUTEX_FAST);
    if( pShmNode->mutex==0 ){
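The block above derives the shared-memory file name by appending "-shm" to the database path held in zBasePath. A small standalone illustration of that naming; the path is hypothetical:

  #include <stdio.h>

  /* Build the "-shm" companion name for a database file (path hypothetical). */
  static void shmNameDemo(void){
    const char *zBasePath = "/home/user/test.db";
    char zShmFilename[64];
    snprintf(zShmFilename, sizeof(zShmFilename), "%s-shm", zBasePath);
    printf("%s\n", zShmFilename);   /* prints /home/user/test.db-shm */
  }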
Before (at line 29831):

  /* If pShmNode->nRef has reached 0, then close the underlying
  ** shared-memory file, too */
  unixEnterMutex();
  assert( pShmNode->nRef>0 );
  pShmNode->nRef--;
  if( pShmNode->nRef==0 ){
    if( deleteFlag && pShmNode->h>=0 ) osUnlink(pShmNode->zFilename);


    unixShmPurge(pDbFd);
  }
  unixLeaveMutex();

  return SQLITE_OK;
}

After (at line 29890):

  /* If pShmNode->nRef has reached 0, then close the underlying
  ** shared-memory file, too */
  unixEnterMutex();
  assert( pShmNode->nRef>0 );
  pShmNode->nRef--;
  if( pShmNode->nRef==0 ){
    if( deleteFlag && pShmNode->h>=0 ){
      osUnlink(pShmNode->zFilename);
    }
    unixShmPurge(pDbFd);
  }
  unixLeaveMutex();

  return SQLITE_OK;
}

Before (at line 30108):
**
**   *  A constant sqlite3_io_methods object call METHOD that has locking
**      methods CLOSE, LOCK, UNLOCK, CKRESLOCK.
**
**   *  An I/O method finder function called FINDER that returns a pointer
**      to the METHOD object in the previous bullet.
*/
#define IOMETHODS(FINDER, METHOD, VERSION, CLOSE, LOCK, UNLOCK, CKLOCK, SHMMAP) \
static const sqlite3_io_methods METHOD = {                                   \
   VERSION,                    /* iVersion */                                \
   CLOSE,                      /* xClose */                                  \
   unixRead,                   /* xRead */                                   \
   unixWrite,                  /* xWrite */                                  \
   unixTruncate,               /* xTruncate */                               \
   unixSync,                   /* xSync */                                   \
After (at line 30169):
**
**   *  A constant sqlite3_io_methods object call METHOD that has locking
**      methods CLOSE, LOCK, UNLOCK, CKRESLOCK.
**
**   *  An I/O method finder function called FINDER that returns a pointer
**      to the METHOD object in the previous bullet.
*/
#define IOMETHODS(FINDER,METHOD,VERSION,CLOSE,LOCK,UNLOCK,CKLOCK,SHMMAP)     \
static const sqlite3_io_methods METHOD = {                                   \
   VERSION,                    /* iVersion */                                \
   CLOSE,                      /* xClose */                                  \
   unixRead,                   /* xRead */                                   \
   unixWrite,                  /* xWrite */                                  \
   unixTruncate,               /* xTruncate */                               \
   unixSync,                   /* xSync */                                   \
Before (at line 30536):
        pNew->pInode->aSemName[0] = '\0';
      }
    }
    unixLeaveMutex();
  }
#endif
  
  pNew->lastErrno = 0;
#if OS_VXWORKS
  if( rc!=SQLITE_OK ){
    if( h>=0 ) robust_close(pNew, h, __LINE__);
    h = -1;
    osUnlink(zFilename);
    pNew->ctrlFlags |= UNIXFILE_DELETE;
  }
After (at line 30597):
        pNew->pInode->aSemName[0] = '\0';
      }
    }
    unixLeaveMutex();
  }
#endif
  
  storeLastErrno(pNew, 0);
#if OS_VXWORKS
  if( rc!=SQLITE_OK ){
    if( h>=0 ) robust_close(pNew, h, __LINE__);
    h = -1;
    osUnlink(zFilename);
    pNew->ctrlFlags |= UNIXFILE_DELETE;
  }
Before (at line 30984):
#endif

  noLock = eType!=SQLITE_OPEN_MAIN_DB;

  
#if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE
  if( fstatfs(fd, &fsInfo) == -1 ){
    ((unixFile*)pFile)->lastErrno = errno;
    robust_close(p, fd, __LINE__);
    return SQLITE_IOERR_ACCESS;
  }
  if (0 == strncmp("msdos", fsInfo.f_fstypename, 5)) {
    ((unixFile*)pFile)->fsFlags |= SQLITE_FSFLAGS_IS_MSDOS;
  }



#endif

  /* Set up appropriate ctrlFlags */
  if( isDelete )                ctrlFlags |= UNIXFILE_DELETE;
  if( isReadonly )              ctrlFlags |= UNIXFILE_RDONLY;
  if( noLock )                  ctrlFlags |= UNIXFILE_NOLOCK;
  if( syncDir )                 ctrlFlags |= UNIXFILE_DIRSYNC;
After (at line 31045):
#endif

  noLock = eType!=SQLITE_OPEN_MAIN_DB;

  
#if defined(__APPLE__) || SQLITE_ENABLE_LOCKING_STYLE
  if( fstatfs(fd, &fsInfo) == -1 ){
    storeLastErrno(p, errno);
    robust_close(p, fd, __LINE__);
    return SQLITE_IOERR_ACCESS;
  }
  if (0 == strncmp("msdos", fsInfo.f_fstypename, 5)) {
    ((unixFile*)pFile)->fsFlags |= SQLITE_FSFLAGS_IS_MSDOS;
  }
  if (0 == strncmp("exfat", fsInfo.f_fstypename, 5)) {
    ((unixFile*)pFile)->fsFlags |= SQLITE_FSFLAGS_IS_MSDOS;
  }
#endif

  /* Set up appropriate ctrlFlags */
  if( isDelete )                ctrlFlags |= UNIXFILE_DELETE;
  if( isReadonly )              ctrlFlags |= UNIXFILE_RDONLY;
  if( noLock )                  ctrlFlags |= UNIXFILE_NOLOCK;
  if( syncDir )                 ctrlFlags |= UNIXFILE_DIRSYNC;
Before (at line 31013):
    int useProxy = 0;

    /* SQLITE_FORCE_PROXY_LOCKING==1 means force always use proxy, 0 means 
    ** never use proxy, NULL means use proxy for non-local files only.  */
    if( envforce!=NULL ){
      useProxy = atoi(envforce)>0;
    }else{
      if( statfs(zPath, &fsInfo) == -1 ){
        /* In theory, the close(fd) call is sub-optimal. If the file opened
        ** with fd is a database file, and there are other connections open
        ** on that file that are currently holding advisory locks on it,
        ** then the call to close() will cancel those locks. In practice,
        ** we're assuming that statfs() doesn't fail very often. At least
        ** not while other file descriptors opened by the same process on
        ** the same file are working.  */
        p->lastErrno = errno;
        robust_close(p, fd, __LINE__);
        rc = SQLITE_IOERR_ACCESS;
        goto open_finished;
      }
      useProxy = !(fsInfo.f_flags&MNT_LOCAL);
    }
    if( useProxy ){
      rc = fillInUnixFile(pVfs, fd, pFile, zPath, ctrlFlags);
      if( rc==SQLITE_OK ){
        rc = proxyTransformUnixFile((unixFile*)pFile, ":auto:");
        if( rc!=SQLITE_OK ){
After (at line 31077):
    int useProxy = 0;

    /* SQLITE_FORCE_PROXY_LOCKING==1 means force always use proxy, 0 means 
    ** never use proxy, NULL means use proxy for non-local files only.  */
    if( envforce!=NULL ){
      useProxy = atoi(envforce)>0;
    }else{













      useProxy = !(fsInfo.f_flags&MNT_LOCAL);
    }
    if( useProxy ){
      rc = fillInUnixFile(pVfs, fd, pFile, zPath, ctrlFlags);
      if( rc==SQLITE_OK ){
        rc = proxyTransformUnixFile((unixFile*)pFile, ":auto:");
        if( rc!=SQLITE_OK ){
Before (at line 31451):
**
**
** Using proxy locks
** -----------------
**
** C APIs
**
**  sqlite3_file_control(db, dbname, SQLITE_SET_LOCKPROXYFILE,
**                       <proxy_path> | ":auto:");
**  sqlite3_file_control(db, dbname, SQLITE_GET_LOCKPROXYFILE, &<proxy_path>);

**
**
** SQL pragmas
**
**  PRAGMA [database.]lock_proxy_file=<proxy_path> | :auto:
**  PRAGMA [database.]lock_proxy_file
**
After (at line 31502):
**
**
** Using proxy locks
** -----------------
**
** C APIs
**
**  sqlite3_file_control(db, dbname, SQLITE_FCNTL_SET_LOCKPROXYFILE,
**                       <proxy_path> | ":auto:");
**  sqlite3_file_control(db, dbname, SQLITE_FCNTL_GET_LOCKPROXYFILE,
**                       &<proxy_path>);
**
**
** SQL pragmas
**
**  PRAGMA [database.]lock_proxy_file=<proxy_path> | :auto:
**  PRAGMA [database.]lock_proxy_file
**
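A hedged sketch of the C-API calls documented in the comment above (meaningful only on Apple builds compiled with SQLITE_ENABLE_LOCKING_STYLE); the "main" schema name and the ":auto:" proxy path are assumptions:

  #include <sqlite3.h>

  /* Request an automatically chosen proxy lock file for the "main" database,
  ** then read back the path the VFS settled on. */
  static int useProxyLocking(sqlite3 *db){
    const char *zProxyPath = 0;
    int rc = sqlite3_file_control(db, "main", SQLITE_FCNTL_SET_LOCKPROXYFILE,
                                  (void*)":auto:");
    if( rc==SQLITE_OK ){
      rc = sqlite3_file_control(db, "main", SQLITE_FCNTL_GET_LOCKPROXYFILE,
                                (void*)&zProxyPath);
    }
    return rc;
  }

The same setting is reachable from SQL through the lock_proxy_file pragma shown above.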
Before (at line 31546):
**       lock proxy files, only used when LOCKPROXYDIR is not set.
**    
**    
** As mentioned above, when compiled with SQLITE_PREFER_PROXY_LOCKING,
** setting the environment variable SQLITE_FORCE_PROXY_LOCKING to 1 will
** force proxy locking to be used for every database file opened, and 0
** will force automatic proxy locking to be disabled for all database
** files (explicitly calling the SQLITE_SET_LOCKPROXYFILE pragma or
** sqlite_file_control API is not affected by SQLITE_FORCE_PROXY_LOCKING).
*/

/*
** Proxy locking is only available on MacOSX 
*/
#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE

/*
** The proxyLockingContext has the path and file structures for the remote 
** and local proxy files in it
*/
typedef struct proxyLockingContext proxyLockingContext;
struct proxyLockingContext {
  unixFile *conchFile;         /* Open conch file */
  char *conchFilePath;         /* Name of the conch file */
  unixFile *lockProxy;         /* Open proxy lock file */
  char *lockProxyPath;         /* Name of the proxy lock file */
  char *dbPath;                /* Name of the open file */
  int conchHeld;               /* 1 if the conch is held, -1 if lockless */

  void *oldLockingContext;     /* Original lockingcontext to restore on close */
  sqlite3_io_methods const *pOldMethod;     /* Original I/O methods for close */
};

/* 
** The proxy lock file path for the database at dbPath is written into lPath, 
** which must point to valid, writable memory large enough for a maxLen length
After (at line 31598):
**       lock proxy files, only used when LOCKPROXYDIR is not set.
**    
**    
** As mentioned above, when compiled with SQLITE_PREFER_PROXY_LOCKING,
** setting the environment variable SQLITE_FORCE_PROXY_LOCKING to 1 will
** force proxy locking to be used for every database file opened, and 0
** will force automatic proxy locking to be disabled for all database
** files (explicitly calling the SQLITE_FCNTL_SET_LOCKPROXYFILE pragma or
** sqlite_file_control API is not affected by SQLITE_FORCE_PROXY_LOCKING).
*/

/*
** Proxy locking is only available on MacOSX 
*/
#if defined(__APPLE__) && SQLITE_ENABLE_LOCKING_STYLE

/*
** The proxyLockingContext has the path and file structures for the remote 
** and local proxy files in it
*/
typedef struct proxyLockingContext proxyLockingContext;
struct proxyLockingContext {
  unixFile *conchFile;         /* Open conch file */
  char *conchFilePath;         /* Name of the conch file */
  unixFile *lockProxy;         /* Open proxy lock file */
  char *lockProxyPath;         /* Name of the proxy lock file */
  char *dbPath;                /* Name of the open file */
  int conchHeld;               /* 1 if the conch is held, -1 if lockless */
  int nFails;                  /* Number of conch taking failures */
  void *oldLockingContext;     /* Original lockingcontext to restore on close */
  sqlite3_io_methods const *pOldMethod;     /* Original I/O methods for close */
};

/* 
** The proxy lock file path for the database at dbPath is written into lPath, 
** which must point to valid, writable memory large enough for a maxLen length
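As the comment above describes, builds made with SQLITE_PREFER_PROXY_LOCKING consult the SQLITE_FORCE_PROXY_LOCKING environment variable. A minimal sketch, assuming a POSIX setenv() and that the variable is set before any database file is opened:

  #include <stdlib.h>
  #include <sqlite3.h>

  /* Force proxy locking on ("1") for every database opened afterwards. */
  static int openWithForcedProxy(const char *zDb, sqlite3 **ppDb){
    setenv("SQLITE_FORCE_PROXY_LOCKING", "1", 1);
    return sqlite3_open(zDb, ppDb);
  }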
Before (at line 31755):

/* get the host ID via gethostuuid(), pHostID must point to PROXY_HOSTIDLEN 
** bytes of writable memory.
*/
static int proxyGetHostID(unsigned char *pHostID, int *pError){
  assert(PROXY_HOSTIDLEN == sizeof(uuid_t));
  memset(pHostID, 0, PROXY_HOSTIDLEN);
#if defined(__MAX_OS_X_VERSION_MIN_REQUIRED)\
               && __MAC_OS_X_VERSION_MIN_REQUIRED<1050
  {
    static const struct timespec timeout = {1, 0}; /* 1 sec timeout */
    if( gethostuuid(pHostID, &timeout) ){
      int err = errno;
      if( pError ){
        *pError = err;
      }
      return SQLITE_IOERR;
    }
After (at line 31808):

/* get the host ID via gethostuuid(), pHostID must point to PROXY_HOSTIDLEN 
** bytes of writable memory.
*/
static int proxyGetHostID(unsigned char *pHostID, int *pError){
  assert(PROXY_HOSTIDLEN == sizeof(uuid_t));
  memset(pHostID, 0, PROXY_HOSTIDLEN);
# if defined(__APPLE__) && ((__MAC_OS_X_VERSION_MIN_REQUIRED > 1050) || \
                            (__IPHONE_OS_VERSION_MIN_REQUIRED > 2000))
  {
    struct timespec timeout = {1, 0}; /* 1 sec timeout */
    if( gethostuuid(pHostID, &timeout) ){
      int err = errno;
      if( pError ){
        *pError = err;
      }
      return SQLITE_IOERR;
    }
Before (at line 31873):
       * 1st try: get the mod time of the conch, wait 0.5s and try again. 
       * 2nd try: fail if the mod time changed or host id is different, wait 
       *           10 sec and try again
       * 3rd try: break the lock unless the mod time has changed.
       */
      struct stat buf;
      if( osFstat(conchFile->h, &buf) ){
        pFile->lastErrno = errno;
        return SQLITE_IOERR_LOCK;
      }
      
      if( nTries==1 ){
        conchModTime = buf.st_mtimespec;
        usleep(500000); /* wait 0.5 sec and try the lock again*/
        continue;  
      }

      assert( nTries>1 );
      if( conchModTime.tv_sec != buf.st_mtimespec.tv_sec || 
         conchModTime.tv_nsec != buf.st_mtimespec.tv_nsec ){
        return SQLITE_BUSY;
      }
      
      if( nTries==2 ){  
        char tBuf[PROXY_MAXCONCHLEN];
        int len = osPread(conchFile->h, tBuf, PROXY_MAXCONCHLEN, 0);
        if( len<0 ){
          pFile->lastErrno = errno;
          return SQLITE_IOERR_LOCK;
        }
        if( len>PROXY_PATHINDEX && tBuf[0]==(char)PROXY_CONCHVERSION){
          /* don't break the lock if the host id doesn't match */
          if( 0!=memcmp(&tBuf[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN) ){
            return SQLITE_BUSY;
          }
        }else{
          /* don't break the lock on short read or a version mismatch */
          return SQLITE_BUSY;
        }
        usleep(10000000); /* wait 10 sec and try the lock again */
        continue; 
      }
      
      assert( nTries==3 );
      if( 0==proxyBreakConchLock(pFile, myHostID) ){
        rc = SQLITE_OK;
        if( lockType==EXCLUSIVE_LOCK ){
          rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, SHARED_LOCK);          
        }
        if( !rc ){
          rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, lockType);
        }
      }
    }
  } while( rc==SQLITE_BUSY && nTries<3 );
After (at line 31926):
       * 1st try: get the mod time of the conch, wait 0.5s and try again. 
       * 2nd try: fail if the mod time changed or host id is different, wait 
       *           10 sec and try again
       * 3rd try: break the lock unless the mod time has changed.
       */
      struct stat buf;
      if( osFstat(conchFile->h, &buf) ){
        storeLastErrno(pFile, errno);
        return SQLITE_IOERR_LOCK;
      }
      
      if( nTries==1 ){
        conchModTime = buf.st_mtimespec;
        usleep(500000); /* wait 0.5 sec and try the lock again*/
        continue;  
      }

      assert( nTries>1 );
      if( conchModTime.tv_sec != buf.st_mtimespec.tv_sec || 
         conchModTime.tv_nsec != buf.st_mtimespec.tv_nsec ){
        return SQLITE_BUSY;
      }
      
      if( nTries==2 ){  
        char tBuf[PROXY_MAXCONCHLEN];
        int len = osPread(conchFile->h, tBuf, PROXY_MAXCONCHLEN, 0);
        if( len<0 ){
          storeLastErrno(pFile, errno);
          return SQLITE_IOERR_LOCK;
        }
        if( len>PROXY_PATHINDEX && tBuf[0]==(char)PROXY_CONCHVERSION){
          /* don't break the lock if the host id doesn't match */
          if( 0!=memcmp(&tBuf[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN) ){
            return SQLITE_BUSY;
          }
        }else{
          /* don't break the lock on short read or a version mismatch */
          return SQLITE_BUSY;
        }
        usleep(10000000); /* wait 10 sec and try the lock again */
        continue; 
      }
      
      assert( nTries==3 );
      if( 0==proxyBreakConchLock(pFile, myHostID) ){
        rc = SQLITE_OK;
        if( lockType==EXCLUSIVE_LOCK ){
          rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, SHARED_LOCK);
        }
        if( !rc ){
          rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, lockType);
        }
      }
    }
  } while( rc==SQLITE_BUSY && nTries<3 );
Before (at line 31955):
    int forceNewLockPath = 0;
    
    OSTRACE(("TAKECONCH  %d for %s pid=%d\n", conchFile->h,
             (pCtx->lockProxyPath ? pCtx->lockProxyPath : ":auto:"), getpid()));

    rc = proxyGetHostID(myHostID, &pError);
    if( (rc&0xff)==SQLITE_IOERR ){
      pFile->lastErrno = pError;
      goto end_takeconch;
    }
    rc = proxyConchLock(pFile, myHostID, SHARED_LOCK);
    if( rc!=SQLITE_OK ){
      goto end_takeconch;
    }
    /* read the existing conch file */
    readLen = seekAndRead((unixFile*)conchFile, 0, readBuf, PROXY_MAXCONCHLEN);
    if( readLen<0 ){
      /* I/O error: lastErrno set by seekAndRead */
      pFile->lastErrno = conchFile->lastErrno;
      rc = SQLITE_IOERR_READ;
      goto end_takeconch;
    }else if( readLen<=(PROXY_HEADERLEN+PROXY_HOSTIDLEN) || 
             readBuf[0]!=(char)PROXY_CONCHVERSION ){
      /* a short read or version format mismatch means we need to create a new 
      ** conch file. 
      */
After (at line 32008):
    int forceNewLockPath = 0;
    
    OSTRACE(("TAKECONCH  %d for %s pid=%d\n", conchFile->h,
             (pCtx->lockProxyPath ? pCtx->lockProxyPath : ":auto:"), getpid()));

    rc = proxyGetHostID(myHostID, &pError);
    if( (rc&0xff)==SQLITE_IOERR ){
      storeLastErrno(pFile, pError);
      goto end_takeconch;
    }
    rc = proxyConchLock(pFile, myHostID, SHARED_LOCK);
    if( rc!=SQLITE_OK ){
      goto end_takeconch;
    }
    /* read the existing conch file */
    readLen = seekAndRead((unixFile*)conchFile, 0, readBuf, PROXY_MAXCONCHLEN);
    if( readLen<0 ){
      /* I/O error: lastErrno set by seekAndRead */
      storeLastErrno(pFile, conchFile->lastErrno);
      rc = SQLITE_IOERR_READ;
      goto end_takeconch;
    }else if( readLen<=(PROXY_HEADERLEN+PROXY_HOSTIDLEN) || 
             readBuf[0]!=(char)PROXY_CONCHVERSION ){
      /* a short read or version format mismatch means we need to create a new 
      ** conch file. 
      */
Before (at line 32039):
          /* We are trying for an exclusive lock but another thread in this
           ** same process is still holding a shared lock. */
          rc = SQLITE_BUSY;
        } else {          
          rc = proxyConchLock(pFile, myHostID, EXCLUSIVE_LOCK);
        }
      }else{
        rc = conchFile->pMethod->xLock((sqlite3_file*)conchFile, EXCLUSIVE_LOCK);
      }
      if( rc==SQLITE_OK ){
        char writeBuffer[PROXY_MAXCONCHLEN];
        int writeSize = 0;
        
        writeBuffer[0] = (char)PROXY_CONCHVERSION;
        memcpy(&writeBuffer[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN);
        if( pCtx->lockProxyPath!=NULL ){
          strlcpy(&writeBuffer[PROXY_PATHINDEX], pCtx->lockProxyPath, MAXPATHLEN);

        }else{
          strlcpy(&writeBuffer[PROXY_PATHINDEX], tempLockPath, MAXPATHLEN);
        }
        writeSize = PROXY_PATHINDEX + strlen(&writeBuffer[PROXY_PATHINDEX]);
        robust_ftruncate(conchFile->h, writeSize);
        rc = unixWrite((sqlite3_file *)conchFile, writeBuffer, writeSize, 0);
        fsync(conchFile->h);
After (at line 32092):
          /* We are trying for an exclusive lock but another thread in this
           ** same process is still holding a shared lock. */
          rc = SQLITE_BUSY;
        } else {          
          rc = proxyConchLock(pFile, myHostID, EXCLUSIVE_LOCK);
        }
      }else{
        rc = proxyConchLock(pFile, myHostID, EXCLUSIVE_LOCK);
      }
      if( rc==SQLITE_OK ){
        char writeBuffer[PROXY_MAXCONCHLEN];
        int writeSize = 0;
        
        writeBuffer[0] = (char)PROXY_CONCHVERSION;
        memcpy(&writeBuffer[PROXY_HEADERLEN], myHostID, PROXY_HOSTIDLEN);
        if( pCtx->lockProxyPath!=NULL ){
          strlcpy(&writeBuffer[PROXY_PATHINDEX], pCtx->lockProxyPath,
                  MAXPATHLEN);
        }else{
          strlcpy(&writeBuffer[PROXY_PATHINDEX], tempLockPath, MAXPATHLEN);
        }
        writeSize = PROXY_PATHINDEX + strlen(&writeBuffer[PROXY_PATHINDEX]);
        robust_ftruncate(conchFile->h, writeSize);
        rc = unixWrite((sqlite3_file *)conchFile, writeBuffer, writeSize, 0);
        fsync(conchFile->h);
Before (at line 32260):
*/
static int proxyGetDbPathForUnixFile(unixFile *pFile, char *dbPath){
#if defined(__APPLE__)
  if( pFile->pMethod == &afpIoMethods ){
    /* afp style keeps a reference to the db path in the filePath field 
    ** of the struct */
    assert( (int)strlen((char*)pFile->lockingContext)<=MAXPATHLEN );
    strlcpy(dbPath, ((afpLockingContext *)pFile->lockingContext)->dbPath, MAXPATHLEN);

  } else
#endif
  if( pFile->pMethod == &dotlockIoMethods ){
    /* dot lock style uses the locking context to store the dot lock
    ** file path */
    int len = strlen((char *)pFile->lockingContext) - strlen(DOTLOCK_SUFFIX);
    memcpy(dbPath, (char *)pFile->lockingContext, len + 1);
After (at line 32314):
*/
static int proxyGetDbPathForUnixFile(unixFile *pFile, char *dbPath){
#if defined(__APPLE__)
  if( pFile->pMethod == &afpIoMethods ){
    /* afp style keeps a reference to the db path in the filePath field 
    ** of the struct */
    assert( (int)strlen((char*)pFile->lockingContext)<=MAXPATHLEN );
    strlcpy(dbPath, ((afpLockingContext *)pFile->lockingContext)->dbPath,
            MAXPATHLEN);
  } else
#endif
  if( pFile->pMethod == &dotlockIoMethods ){
    /* dot lock style uses the locking context to store the dot lock
    ** file path */
    int len = strlen((char *)pFile->lockingContext) - strlen(DOTLOCK_SUFFIX);
    memcpy(dbPath, (char *)pFile->lockingContext, len + 1);
Before (at line 32373):

/*
** This routine handles sqlite3_file_control() calls that are specific
** to proxy locking.
*/
static int proxyFileControl(sqlite3_file *id, int op, void *pArg){
  switch( op ){
    case SQLITE_GET_LOCKPROXYFILE: {
      unixFile *pFile = (unixFile*)id;
      if( pFile->pMethod == &proxyIoMethods ){
        proxyLockingContext *pCtx = (proxyLockingContext*)pFile->lockingContext;
        proxyTakeConch(pFile);
        if( pCtx->lockProxyPath ){
          *(const char **)pArg = pCtx->lockProxyPath;
        }else{
          *(const char **)pArg = ":auto: (not held)";
        }
      } else {
        *(const char **)pArg = NULL;
      }
      return SQLITE_OK;
    }
    case SQLITE_SET_LOCKPROXYFILE: {
      unixFile *pFile = (unixFile*)id;
      int rc = SQLITE_OK;
      int isProxyStyle = (pFile->pMethod == &proxyIoMethods);
      if( pArg==NULL || (const char *)pArg==0 ){
        if( isProxyStyle ){
          /* turn off proxy locking - not supported */



          rc = SQLITE_ERROR /*SQLITE_PROTOCOL? SQLITE_MISUSE?*/;
        }else{
          /* turn off proxy locking - already off - NOOP */
          rc = SQLITE_OK;
        }
      }else{
        const char *proxyPath = (const char *)pArg;
After (at line 32428):

/*
** This routine handles sqlite3_file_control() calls that are specific
** to proxy locking.
*/
static int proxyFileControl(sqlite3_file *id, int op, void *pArg){
  switch( op ){
    case SQLITE_FCNTL_GET_LOCKPROXYFILE: {
      unixFile *pFile = (unixFile*)id;
      if( pFile->pMethod == &proxyIoMethods ){
        proxyLockingContext *pCtx = (proxyLockingContext*)pFile->lockingContext;
        proxyTakeConch(pFile);
        if( pCtx->lockProxyPath ){
          *(const char **)pArg = pCtx->lockProxyPath;
        }else{
          *(const char **)pArg = ":auto: (not held)";
        }
      } else {
        *(const char **)pArg = NULL;
      }
      return SQLITE_OK;
    }
    case SQLITE_FCNTL_SET_LOCKPROXYFILE: {
      unixFile *pFile = (unixFile*)id;
      int rc = SQLITE_OK;
      int isProxyStyle = (pFile->pMethod == &proxyIoMethods);
      if( pArg==NULL || (const char *)pArg==0 ){
        if( isProxyStyle ){
          /* turn off proxy locking - not supported.  If support is added for
          ** switching proxy locking mode off then it will need to fail if
          ** the journal mode is WAL mode. 
          */
          rc = SQLITE_ERROR /*SQLITE_PROTOCOL? SQLITE_MISUSE?*/;
        }else{
          /* turn off proxy locking - already off - NOOP */
          rc = SQLITE_OK;
        }
      }else{
        const char *proxyPath = (const char *)pArg;
Before (at line 33963):
#define osInterlockedCompareExchange InterlockedCompareExchange
#else
  { "InterlockedCompareExchange", (SYSCALL)InterlockedCompareExchange, 0 },

#define osInterlockedCompareExchange ((LONG(WINAPI*)(LONG \
        SQLITE_WIN32_VOLATILE*, LONG,LONG))aSyscall[76].pCurrent)
#endif /* defined(InterlockedCompareExchange) */


















}; /* End of the overrideable system calls */

/*
** This is the xSetSystemCall() method of sqlite3_vfs for all of the
** "win32" VFSes.  Return SQLITE_OK opon successfully updating the
** system call pointer, or SQLITE_NOTFOUND if there is no configurable
After (at line 34021):
#define osInterlockedCompareExchange InterlockedCompareExchange
#else
  { "InterlockedCompareExchange", (SYSCALL)InterlockedCompareExchange, 0 },

#define osInterlockedCompareExchange ((LONG(WINAPI*)(LONG \
        SQLITE_WIN32_VOLATILE*, LONG,LONG))aSyscall[76].pCurrent)
#endif /* defined(InterlockedCompareExchange) */

#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID
  { "UuidCreate",               (SYSCALL)UuidCreate,             0 },
#else
  { "UuidCreate",               (SYSCALL)0,                      0 },
#endif

#define osUuidCreate ((RPC_STATUS(RPC_ENTRY*)(UUID*))aSyscall[77].pCurrent)

#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID
  { "UuidCreateSequential",     (SYSCALL)UuidCreateSequential,   0 },
#else
  { "UuidCreateSequential",     (SYSCALL)0,                      0 },
#endif

#define osUuidCreateSequential \
        ((RPC_STATUS(RPC_ENTRY*)(UUID*))aSyscall[78].pCurrent)

}; /* End of the overrideable system calls */

/*
** This is the xSetSystemCall() method of sqlite3_vfs for all of the
** "win32" VFSes.  Return SQLITE_OK opon successfully updating the
** system call pointer, or SQLITE_NOTFOUND if there is no configurable
Before (at line 36712):
  sqlite3_file *fd,               /* Handle open on database file */
  int iRegion,                    /* Region to retrieve */
  int szRegion,                   /* Size of regions */
  int isWrite,                    /* True to extend file if necessary */
  void volatile **pp              /* OUT: Mapped memory */
){
  winFile *pDbFd = (winFile*)fd;
  winShm *p = pDbFd->pShm;
  winShmNode *pShmNode;
  int rc = SQLITE_OK;

  if( !p ){
    rc = winOpenSharedMemory(pDbFd);
    if( rc!=SQLITE_OK ) return rc;
    p = pDbFd->pShm;
  }
  pShmNode = p->pShmNode;

  sqlite3_mutex_enter(pShmNode->mutex);
  assert( szRegion==pShmNode->szRegion || pShmNode->nRegion==0 );

  if( pShmNode->nRegion<=iRegion ){
    struct ShmRegion *apNew;           /* New aRegion[] array */
    int nByte = (iRegion+1)*szRegion;  /* Minimum required file size */
After (at line 36787):
  sqlite3_file *fd,               /* Handle open on database file */
  int iRegion,                    /* Region to retrieve */
  int szRegion,                   /* Size of regions */
  int isWrite,                    /* True to extend file if necessary */
  void volatile **pp              /* OUT: Mapped memory */
){
  winFile *pDbFd = (winFile*)fd;
  winShm *pShm = pDbFd->pShm;
  winShmNode *pShmNode;
  int rc = SQLITE_OK;

  if( !pShm ){
    rc = winOpenSharedMemory(pDbFd);
    if( rc!=SQLITE_OK ) return rc;
    pShm = pDbFd->pShm;
  }
  pShmNode = pShm->pShmNode;

  sqlite3_mutex_enter(pShmNode->mutex);
  assert( szRegion==pShmNode->szRegion || pShmNode->nRegion==0 );

  if( pShmNode->nRegion<=iRegion ){
    struct ShmRegion *apNew;           /* New aRegion[] array */
    int nByte = (iRegion+1)*szRegion;  /* Minimum required file size */
Before (at line 38242):
#endif
  if( sizeof(LARGE_INTEGER)<=nBuf-n ){
    LARGE_INTEGER i;
    osQueryPerformanceCounter(&i);
    memcpy(&zBuf[n], &i, sizeof(i));
    n += sizeof(i);
  }
















#endif
  return n;
}


/*
** Sleep for a little while.  Return the amount of time slept.
After (at line 38317):
#endif
  if( sizeof(LARGE_INTEGER)<=nBuf-n ){
    LARGE_INTEGER i;
    osQueryPerformanceCounter(&i);
    memcpy(&zBuf[n], &i, sizeof(i));
    n += sizeof(i);
  }
#endif
#if !SQLITE_OS_WINCE && !SQLITE_OS_WINRT && SQLITE_WIN32_USE_UUID
  if( sizeof(UUID)<=nBuf-n ){
    UUID id;
    memset(&id, 0, sizeof(UUID));
    osUuidCreate(&id);
    memcpy(zBuf, &id, sizeof(UUID));
    n += sizeof(UUID);
  }
  if( sizeof(UUID)<=nBuf-n ){
    UUID id;
    memset(&id, 0, sizeof(UUID));
    osUuidCreateSequential(&id);
    memcpy(zBuf, &id, sizeof(UUID));
    n += sizeof(UUID);
  }
#endif
  return n;
}


/*
** Sleep for a little while.  Return the amount of time slept.
Before (at line 38420):
    winGetSystemCall,    /* xGetSystemCall */
    winNextSystemCall,   /* xNextSystemCall */
  };
#endif

  /* Double-check that the aSyscall[] array has been constructed
  ** correctly.  See ticket [bb3a86e890c8e96ab] */
  assert( ArraySize(aSyscall)==77 );

  /* get memory map allocation granularity */
  memset(&winSysInfo, 0, sizeof(SYSTEM_INFO));
#if SQLITE_OS_WINRT
  osGetNativeSystemInfo(&winSysInfo);
#else
  osGetSystemInfo(&winSysInfo);
After (at line 38511):
    winGetSystemCall,    /* xGetSystemCall */
    winNextSystemCall,   /* xNextSystemCall */
  };
#endif

  /* Double-check that the aSyscall[] array has been constructed
  ** correctly.  See ticket [bb3a86e890c8e96ab] */
  assert( ArraySize(aSyscall)==79 );

  /* get memory map allocation granularity */
  memset(&winSysInfo, 0, sizeof(SYSTEM_INFO));
#if SQLITE_OS_WINRT
  osGetNativeSystemInfo(&winSysInfo);
#else
  osGetSystemInfo(&winSysInfo);
Before (at line 52087):
#ifndef SQLITE_OMIT_AUTOVACUUM
  u8 autoVacuum;        /* True if auto-vacuum is enabled */
  u8 incrVacuum;        /* True if incr-vacuum is enabled */
  u8 bDoTruncate;       /* True to truncate db on commit */
#endif
  u8 inTransaction;     /* Transaction state */
  u8 max1bytePayload;   /* Maximum first byte of cell for a 1-byte payload */



  u16 btsFlags;         /* Boolean parameters.  See BTS_* macros below */
  u16 maxLocal;         /* Maximum local payload in non-LEAFDATA tables */
  u16 minLocal;         /* Minimum local payload in non-LEAFDATA tables */
  u16 maxLeaf;          /* Maximum local payload in a LEAFDATA table */
  u16 minLeaf;          /* Minimum local payload in a LEAFDATA table */
  u32 pageSize;         /* Total number of bytes on a page */
  u32 usableSize;       /* Number of usable bytes on each page */
After (at line 52178):
#ifndef SQLITE_OMIT_AUTOVACUUM
  u8 autoVacuum;        /* True if auto-vacuum is enabled */
  u8 incrVacuum;        /* True if incr-vacuum is enabled */
  u8 bDoTruncate;       /* True to truncate db on commit */
#endif
  u8 inTransaction;     /* Transaction state */
  u8 max1bytePayload;   /* Maximum first byte of cell for a 1-byte payload */
#ifdef SQLITE_HAS_CODEC
  u8 optimalReserve;    /* Desired amount of reserved space per page */
#endif
  u16 btsFlags;         /* Boolean parameters.  See BTS_* macros below */
  u16 maxLocal;         /* Maximum local payload in non-LEAFDATA tables */
  u16 minLocal;         /* Minimum local payload in non-LEAFDATA tables */
  u16 maxLeaf;          /* Maximum local payload in a LEAFDATA table */
  u16 minLeaf;          /* Minimum local payload in a LEAFDATA table */
  u32 pageSize;         /* Total number of bytes on a page */
  u32 usableSize;       /* Number of usable bytes on each page */
Before (at line 52809):
  ** written. For index b-trees, it is the root page of the associated
  ** table.  */
  if( isIndex ){
    HashElem *p;
    for(p=sqliteHashFirst(&pSchema->idxHash); p; p=sqliteHashNext(p)){
      Index *pIdx = (Index *)sqliteHashData(p);
      if( pIdx->tnum==(int)iRoot ){






        iTab = pIdx->pTable->tnum;
      }
    }
  }else{
    iTab = iRoot;
  }

After (at line 52903):
  ** written. For index b-trees, it is the root page of the associated
  ** table.  */
  if( isIndex ){
    HashElem *p;
    for(p=sqliteHashFirst(&pSchema->idxHash); p; p=sqliteHashNext(p)){
      Index *pIdx = (Index *)sqliteHashData(p);
      if( pIdx->tnum==(int)iRoot ){
        if( iTab ){
          /* Two or more indexes share the same root page.  There must
          ** be imposter tables.  So just return true.  The assert is not
          ** useful in that case. */
          return 1;
        }
        iTab = pIdx->pTable->tnum;
      }
    }
  }else{
    iTab = iRoot;
  }

Before (at line 55033):
** and autovacuum mode can no longer be changed.
*/
SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int pageSize, int nReserve, int iFix){
  int rc = SQLITE_OK;
  BtShared *pBt = p->pBt;
  assert( nReserve>=-1 && nReserve<=255 );
  sqlite3BtreeEnter(p);



  if( pBt->btsFlags & BTS_PAGESIZE_FIXED ){
    sqlite3BtreeLeave(p);
    return SQLITE_READONLY;
  }
  if( nReserve<0 ){
    nReserve = pBt->pageSize - pBt->usableSize;
  }
After (at line 55133):
** and autovacuum mode can no longer be changed.
*/
SQLITE_PRIVATE int sqlite3BtreeSetPageSize(Btree *p, int pageSize, int nReserve, int iFix){
  int rc = SQLITE_OK;
  BtShared *pBt = p->pBt;
  assert( nReserve>=-1 && nReserve<=255 );
  sqlite3BtreeEnter(p);
#if SQLITE_HAS_CODEC
  if( nReserve>pBt->optimalReserve ) pBt->optimalReserve = (u8)nReserve;
#endif
  if( pBt->btsFlags & BTS_PAGESIZE_FIXED ){
    sqlite3BtreeLeave(p);
    return SQLITE_READONLY;
  }
  if( nReserve<0 ){
    nReserve = pBt->pageSize - pBt->usableSize;
  }
Before (at line 55062):
/*
** Return the currently defined page size
*/
SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree *p){
  return p->pBt->pageSize;
}

#if defined(SQLITE_HAS_CODEC) || defined(SQLITE_DEBUG)
/*
** This function is similar to sqlite3BtreeGetReserve(), except that it
** may only be called if it is guaranteed that the b-tree mutex is already
** held.
**
** This is useful in one special case in the backup API code where it is
** known that the shared b-tree mutex is held, but the mutex on the 
** database handle that owns *p is not. In this case if sqlite3BtreeEnter()
** were to be called, it might collide with some other operation on the
** database handle that owns *p, causing undefined behavior.
*/
SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p){

  assert( sqlite3_mutex_held(p->pBt->mutex) );
  return p->pBt->pageSize - p->pBt->usableSize;

}
#endif /* SQLITE_HAS_CODEC || SQLITE_DEBUG */

#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM)
/*
** Return the number of bytes of space at the end of every page that
** are intentually left unused.  This is the "reserved" space that is
** sometimes used by extensions.




*/
SQLITE_PRIVATE int sqlite3BtreeGetReserve(Btree *p){
  int n;
  sqlite3BtreeEnter(p);


  n = p->pBt->pageSize - p->pBt->usableSize;

  sqlite3BtreeLeave(p);
  return n;
}


/*
** Set the maximum page count for a database if mxPage is positive.
** No changes are made if mxPage is 0 or negative.
** Regardless of the value of mxPage, return the maximum page count.
*/
SQLITE_PRIVATE int sqlite3BtreeMaxPageCount(Btree *p, int mxPage){
After (at line 55165):
/*
** Return the currently defined page size
*/
SQLITE_PRIVATE int sqlite3BtreeGetPageSize(Btree *p){
  return p->pBt->pageSize;
}


/*
** This function is similar to sqlite3BtreeGetReserve(), except that it
** may only be called if it is guaranteed that the b-tree mutex is already
** held.
**
** This is useful in one special case in the backup API code where it is
** known that the shared b-tree mutex is held, but the mutex on the 
** database handle that owns *p is not. In this case if sqlite3BtreeEnter()
** were to be called, it might collide with some other operation on the
** database handle that owns *p, causing undefined behavior.
*/
SQLITE_PRIVATE int sqlite3BtreeGetReserveNoMutex(Btree *p){
  int n;
  assert( sqlite3_mutex_held(p->pBt->mutex) );
  n = p->pBt->pageSize - p->pBt->usableSize;
  return n;
}



/*
** Return the number of bytes of space at the end of every page that
** are intentually left unused.  This is the "reserved" space that is
** sometimes used by extensions.
**
** If SQLITE_HAS_MUTEX is defined then the number returned is the
** greater of the current reserved space and the maximum requested
** reserve space.
*/
SQLITE_PRIVATE int sqlite3BtreeGetOptimalReserve(Btree *p){
  int n;
  sqlite3BtreeEnter(p);
  n = sqlite3BtreeGetReserveNoMutex(p);
#ifdef SQLITE_HAS_CODEC
  if( n<p->pBt->optimalReserve ) n = p->pBt->optimalReserve;
#endif
  sqlite3BtreeLeave(p);
  return n;
}


/*
** Set the maximum page count for a database if mxPage is positive.
** No changes are made if mxPage is 0 or negative.
** Regardless of the value of mxPage, return the maximum page count.
*/
SQLITE_PRIVATE int sqlite3BtreeMaxPageCount(Btree *p, int mxPage){
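
For reference, the "reserved" space that sqlite3BtreeGetOptimalReserve() reports is the same quantity recorded at byte offset 20 of the database file header. A minimal sketch, assuming a well-formed database file on disk; reservedBytesFromHeader is a hypothetical helper, not part of the amalgamation:

#include <stdio.h>

/* Read the per-page "reserved" byte count stored at offset 20 of the
** 100-byte database header; this equals pageSize - usableSize, the value
** sqlite3BtreeGetReserveNoMutex() computes from the in-memory BtShared. */
static int reservedBytesFromHeader(const char *zDbFile){
  unsigned char aHdr[100];
  FILE *f = fopen(zDbFile, "rb");
  if( f==0 || fread(aHdr, 1, sizeof(aHdr), f)!=sizeof(aHdr) ){
    if( f ) fclose(f);
    return -1;              /* not a readable SQLite database */
  }
  fclose(f);
  return aHdr[20];          /* bytes of reserved space at the end of each page */
}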
    p->pBt->btsFlags &= ~BTS_SECURE_DELETE;
    if( newFlag ) p->pBt->btsFlags |= BTS_SECURE_DELETE;
  } 
  b = (p->pBt->btsFlags & BTS_SECURE_DELETE)!=0;
  sqlite3BtreeLeave(p);
  return b;
}
#endif /* !defined(SQLITE_OMIT_PAGER_PRAGMAS) || !defined(SQLITE_OMIT_VACUUM) */

/*
** Change the 'auto-vacuum' property of the database. If the 'autoVacuum'
** parameter is non-zero, then auto-vacuum mode is enabled. If zero, it
** is disabled. The default value for the auto-vacuum property is 
** determined by the SQLITE_DEFAULT_AUTOVACUUM macro.
*/







    p->pBt->btsFlags &= ~BTS_SECURE_DELETE;
    if( newFlag ) p->pBt->btsFlags |= BTS_SECURE_DELETE;
  } 
  b = (p->pBt->btsFlags & BTS_SECURE_DELETE)!=0;
  sqlite3BtreeLeave(p);
  return b;
}


/*
** Change the 'auto-vacuum' property of the database. If the 'autoVacuum'
** parameter is non-zero, then auto-vacuum mode is enabled. If zero, it
** is disabled. The default value for the auto-vacuum property is 
** determined by the SQLITE_DEFAULT_AUTOVACUUM macro.
*/
  const int nCopy = MIN(nSrcPgsz, nDestPgsz);
  const i64 iEnd = (i64)iSrcPg*(i64)nSrcPgsz;
#ifdef SQLITE_HAS_CODEC
  /* Use BtreeGetReserveNoMutex() for the source b-tree, as although it is
  ** guaranteed that the shared-mutex is held by this thread, handle
  ** p->pSrc may not actually be the owner.  */
  int nSrcReserve = sqlite3BtreeGetReserveNoMutex(p->pSrc);
  int nDestReserve = sqlite3BtreeGetReserve(p->pDest);
#endif
  int rc = SQLITE_OK;
  i64 iOff;

  assert( sqlite3BtreeGetReserveNoMutex(p->pSrc)>=0 );
  assert( p->bDestLocked );
  assert( !isFatalError(p->rc) );







  const int nCopy = MIN(nSrcPgsz, nDestPgsz);
  const i64 iEnd = (i64)iSrcPg*(i64)nSrcPgsz;
#ifdef SQLITE_HAS_CODEC
  /* Use BtreeGetReserveNoMutex() for the source b-tree, as although it is
  ** guaranteed that the shared-mutex is held by this thread, handle
  ** p->pSrc may not actually be the owner.  */
  int nSrcReserve = sqlite3BtreeGetReserveNoMutex(p->pSrc);
  int nDestReserve = sqlite3BtreeGetOptimalReserve(p->pDest);
#endif
  int rc = SQLITE_OK;
  i64 iOff;

  assert( sqlite3BtreeGetReserveNoMutex(p->pSrc)>=0 );
  assert( p->bDestLocked );
  assert( !isFatalError(p->rc) );
  u32 szHdr;
  u32 idx;
  u32 notUsed;
  const unsigned char *aKey = (const unsigned char*)pKey;

  if( CORRUPT_DB ) return;
  idx = getVarint32(aKey, szHdr);

  assert( szHdr<=nKey );
  while( idx<szHdr ){
    idx += getVarint32(aKey+idx, notUsed);
    nField++;
  }
  assert( nField <= pKeyInfo->nField+pKeyInfo->nXField );
}
#else







  u32 szHdr;
  u32 idx;
  u32 notUsed;
  const unsigned char *aKey = (const unsigned char*)pKey;

  if( CORRUPT_DB ) return;
  idx = getVarint32(aKey, szHdr);
  assert( nKey>=0 );
  assert( szHdr<=(u32)nKey );
  while( idx<szHdr ){
    idx += getVarint32(aKey+idx, notUsed);
    nField++;
  }
  assert( nField <= pKeyInfo->nField+pKeyInfo->nXField );
}
#else
SQLITE_API void sqlite3_result_zeroblob(sqlite3_context *pCtx, int n){
  assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) );
  sqlite3VdbeMemSetZeroBlob(pCtx->pOut, n);
}
SQLITE_API void sqlite3_result_error_code(sqlite3_context *pCtx, int errCode){
  pCtx->isError = errCode;
  pCtx->fErrorOrAux = 1;



  if( pCtx->pOut->flags & MEM_Null ){
    sqlite3VdbeMemSetStr(pCtx->pOut, sqlite3ErrStr(errCode), -1, 
                         SQLITE_UTF8, SQLITE_STATIC);
  }
}

/* Force an SQLITE_TOOBIG error. */







SQLITE_API void sqlite3_result_zeroblob(sqlite3_context *pCtx, int n){
  assert( sqlite3_mutex_held(pCtx->pOut->db->mutex) );
  sqlite3VdbeMemSetZeroBlob(pCtx->pOut, n);
}
SQLITE_API void sqlite3_result_error_code(sqlite3_context *pCtx, int errCode){
  pCtx->isError = errCode;
  pCtx->fErrorOrAux = 1;
#ifdef SQLITE_DEBUG
  pCtx->pVdbe->rcApp = errCode;
#endif
  if( pCtx->pOut->flags & MEM_Null ){
    sqlite3VdbeMemSetStr(pCtx->pOut, sqlite3ErrStr(errCode), -1, 
                         SQLITE_UTF8, SQLITE_STATIC);
  }
}

/* Force an SQLITE_TOOBIG error. */
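
sqlite3_result_error_code(), whose internals are shown above, is the public entry point an application-defined SQL function uses to fail with a specific error code. A hedged sketch; alwaysBusyFunc is hypothetical and would be registered with sqlite3_create_function() in the usual way:

#include <sqlite3.h>

/* Hypothetical application-defined function that always reports SQLITE_BUSY;
** the statement evaluating it fails with that code. */
static void alwaysBusyFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc; (void)argv;
  sqlite3_result_error_code(ctx, SQLITE_BUSY);
}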
    ** returns, and those were broken by the automatic-reset change.  As a
    ** work-around, the SQLITE_OMIT_AUTORESET compile-time option restores the
    ** legacy behavior of returning SQLITE_MISUSE for cases where the 
    ** previous sqlite3_step() returned something other than a SQLITE_LOCKED
    ** or SQLITE_BUSY error.
    */
#ifdef SQLITE_OMIT_AUTORESET
    if( p->rc==SQLITE_BUSY || p->rc==SQLITE_LOCKED ){
      sqlite3_reset((sqlite3_stmt*)p);
    }else{
      return SQLITE_MISUSE_BKPT;
    }
#else
    sqlite3_reset((sqlite3_stmt*)p);
#endif







    ** returns, and those were broken by the automatic-reset change.  As a
    ** work-around, the SQLITE_OMIT_AUTORESET compile-time option restores the
    ** legacy behavior of returning SQLITE_MISUSE for cases where the 
    ** previous sqlite3_step() returned something other than a SQLITE_LOCKED
    ** or SQLITE_BUSY error.
    */
#ifdef SQLITE_OMIT_AUTORESET
    if( (rc = p->rc&0xff)==SQLITE_BUSY || rc==SQLITE_LOCKED ){
      sqlite3_reset((sqlite3_stmt*)p);
    }else{
      return SQLITE_MISUSE_BKPT;
    }
#else
    sqlite3_reset((sqlite3_stmt*)p);
#endif
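
The work-around above only matters when a finished statement is stepped again. A minimal sketch of the default auto-reset behavior, assuming pStmt is an already prepared statement; stepTwice is a hypothetical helper:

#include <sqlite3.h>

/* With the default build, stepping again after SQLITE_DONE implicitly resets
** the statement and starts a new pass; with SQLITE_OMIT_AUTORESET only
** SQLITE_BUSY/SQLITE_LOCKED allow the retry and anything else is MISUSE. */
static int stepTwice(sqlite3_stmt *pStmt){
  int rc = sqlite3_step(pStmt);      /* first pass over the statement */
  if( rc==SQLITE_DONE ){
    rc = sqlite3_step(pStmt);        /* auto-reset lets this start over */
  }
  return rc;
}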
#endif

    db->nVdbeActive++;
    if( p->readOnly==0 ) db->nVdbeWrite++;
    if( p->bIsReader ) db->nVdbeRead++;
    p->pc = 0;
  }



#ifndef SQLITE_OMIT_EXPLAIN
  if( p->explain ){
    rc = sqlite3VdbeList(p);
  }else
#endif /* SQLITE_OMIT_EXPLAIN */
  {
    db->nVdbeExec++;







#endif

    db->nVdbeActive++;
    if( p->readOnly==0 ) db->nVdbeWrite++;
    if( p->bIsReader ) db->nVdbeRead++;
    p->pc = 0;
  }
#ifdef SQLITE_DEBUG
  p->rcApp = SQLITE_OK;
#endif
#ifndef SQLITE_OMIT_EXPLAIN
  if( p->explain ){
    rc = sqlite3VdbeList(p);
  }else
#endif /* SQLITE_OMIT_EXPLAIN */
  {
    db->nVdbeExec++;
  ** be one of the values in the first assert() below. Variable p->rc 
  ** contains the value that would be returned if sqlite3_finalize() 
  ** were called on statement p.
  */
  assert( rc==SQLITE_ROW  || rc==SQLITE_DONE   || rc==SQLITE_ERROR 
       || rc==SQLITE_BUSY || rc==SQLITE_MISUSE
  );
  assert( p->rc!=SQLITE_ROW && p->rc!=SQLITE_DONE );
  if( p->isPrepareV2 && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){
    /* If this statement was prepared using sqlite3_prepare_v2(), and an
    ** error has occurred, then return the error code in p->rc to the
    ** caller. Set the error code in the database handle to the same value.
    */ 
    rc = sqlite3VdbeTransferError(p);
  }







  ** be one of the values in the first assert() below. Variable p->rc 
  ** contains the value that would be returned if sqlite3_finalize() 
  ** were called on statement p.
  */
  assert( rc==SQLITE_ROW  || rc==SQLITE_DONE   || rc==SQLITE_ERROR 
       || rc==SQLITE_BUSY || rc==SQLITE_MISUSE
  );
  assert( (p->rc!=SQLITE_ROW && p->rc!=SQLITE_DONE) || p->rc==p->rcApp );
  if( p->isPrepareV2 && rc!=SQLITE_ROW && rc!=SQLITE_DONE ){
    /* If this statement was prepared using sqlite3_prepare_v2(), and an
    ** error has occurred, then return the error code in p->rc to the
    ** caller. Set the error code in the database handle to the same value.
    */ 
    rc = sqlite3VdbeTransferError(p);
  }
  int rc = SQLITE_OK;
  char *zErr = 0;
  Table *pTab;
  Parse *pParse = 0;
  Incrblob *pBlob = 0;

#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) || ppBlob==0 || zTable==0 ){
    return SQLITE_MISUSE_BKPT;
  }
#endif
  flags = !!flags;                /* flags = (flags ? 1 : 0); */
  *ppBlob = 0;







  sqlite3_mutex_enter(db->mutex);

  pBlob = (Incrblob *)sqlite3DbMallocZero(db, sizeof(Incrblob));
  if( !pBlob ) goto blob_open_out;
  pParse = sqlite3StackAllocRaw(db, sizeof(*pParse));
  if( !pParse ) goto blob_open_out;







  int rc = SQLITE_OK;
  char *zErr = 0;
  Table *pTab;
  Parse *pParse = 0;
  Incrblob *pBlob = 0;

#ifdef SQLITE_ENABLE_API_ARMOR
  if( ppBlob==0 ){
    return SQLITE_MISUSE_BKPT;
  }
#endif

  *ppBlob = 0;
#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) || zTable==0 ){
    return SQLITE_MISUSE_BKPT;
  }
#endif
  flags = !!flags;                /* flags = (flags ? 1 : 0); */

  sqlite3_mutex_enter(db->mutex);

  pBlob = (Incrblob *)sqlite3DbMallocZero(db, sizeof(Incrblob));
  if( !pBlob ) goto blob_open_out;
  pParse = sqlite3StackAllocRaw(db, sizeof(*pParse));
  if( !pParse ) goto blob_open_out;
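
The reordered checks above guard the public sqlite3_blob_open() entry point, which must be handed a non-NULL out-pointer. A hedged usage sketch; the table "t1", column "content" and the openBlobReadOnly helper are hypothetical:

#include <sqlite3.h>

/* Open a read-only incremental-blob handle on row iRow of t1.content in the
** "main" database; *ppBlob receives the handle on success. */
static int openBlobReadOnly(sqlite3 *db, sqlite3_int64 iRow, sqlite3_blob **ppBlob){
  return sqlite3_blob_open(db, "main", "t1", "content", iRow,
                           0 /* read-only */, ppBlob);
}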
  sqlite3 *db;

  if( p==0 ) return SQLITE_MISUSE_BKPT;
  db = p->db;
  sqlite3_mutex_enter(db->mutex);
  v = (Vdbe*)p->pStmt;

  if( n<0 || iOffset<0 || (iOffset+n)>p->nByte ){
    /* Request is out of range. Return a transient error. */
    rc = SQLITE_ERROR;
  }else if( v==0 ){
    /* If there is no statement handle, then the blob-handle has
    ** already been invalidated. Return SQLITE_ABORT in this case.
    */
    rc = SQLITE_ABORT;







  sqlite3 *db;

  if( p==0 ) return SQLITE_MISUSE_BKPT;
  db = p->db;
  sqlite3_mutex_enter(db->mutex);
  v = (Vdbe*)p->pStmt;

  if( n<0 || iOffset<0 || ((sqlite3_int64)iOffset+n)>p->nByte ){
    /* Request is out of range. Return a transient error. */
    rc = SQLITE_ERROR;
  }else if( v==0 ){
    /* If there is no statement handle, then the blob-handle has
    ** already been invalidated. Return SQLITE_ABORT in this case.
    */
    rc = SQLITE_ABORT;
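
The 64-bit cast added above keeps iOffset+n from overflowing a 32-bit int. An application-side sketch of the same guard, assuming an open blob handle; safeBlobRead is a hypothetical wrapper:

#include <sqlite3.h>

/* Bounds-checked read mirroring the overflow guard in the library code. */
static int safeBlobRead(sqlite3_blob *pBlob, void *pBuf, int n, int iOffset){
  sqlite3_int64 nBlob = sqlite3_blob_bytes(pBlob);
  if( n<0 || iOffset<0 || (sqlite3_int64)iOffset+n>nBlob ) return SQLITE_ERROR;
  return sqlite3_blob_read(pBlob, pBuf, n, iOffset);
}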
  ** schema.  If not found, pSchema will remain NULL and nothing will match
  ** resulting in an appropriate error message toward the end of this routine
  */
  if( zDb ){
    testcase( pNC->ncFlags & NC_PartIdx );
    testcase( pNC->ncFlags & NC_IsCheck );
    if( (pNC->ncFlags & (NC_PartIdx|NC_IsCheck))!=0 ){
      /* Silently ignore database qualifiers inside CHECK constraints and partial
      ** indices.  Do not raise errors because that might break legacy and
      ** because it does not hurt anything to just ignore the database name. */

      zDb = 0;
    }else{
      for(i=0; i<db->nDb; i++){
        assert( db->aDb[i].zName );
        if( sqlite3StrICmp(db->aDb[i].zName,zDb)==0 ){
          pSchema = db->aDb[i].pSchema;
          break;







  ** schema.  If not found, pSchema will remain NULL and nothing will match
  ** resulting in an appropriate error message toward the end of this routine
  */
  if( zDb ){
    testcase( pNC->ncFlags & NC_PartIdx );
    testcase( pNC->ncFlags & NC_IsCheck );
    if( (pNC->ncFlags & (NC_PartIdx|NC_IsCheck))!=0 ){
      /* Silently ignore database qualifiers inside CHECK constraints and
      ** partial indices.  Do not raise errors because that might break
      ** legacy and because it does not hurt anything to just ignore the
      ** database name. */
      zDb = 0;
    }else{
      for(i=0; i<db->nDb; i++){
        assert( db->aDb[i].zName );
        if( sqlite3StrICmp(db->aDb[i].zName,zDb)==0 ){
          pSchema = db->aDb[i].pSchema;
          break;
            break;
          }
        }
      }
      if( pMatch ){
        pExpr->iTable = pMatch->iCursor;
        pExpr->pTab = pMatch->pTab;

        assert( (pMatch->jointype & JT_RIGHT)==0 ); /* RIGHT JOIN not (yet) supported */
        if( (pMatch->jointype & JT_LEFT)!=0 ){
          ExprSetProperty(pExpr, EP_CanBeNull);
        }
        pSchema = pExpr->pTab->pSchema;
      }
    } /* if( pSrcList ) */








            break;
          }
        }
      }
      if( pMatch ){
        pExpr->iTable = pMatch->iCursor;
        pExpr->pTab = pMatch->pTab;
        /* RIGHT JOIN not (yet) supported */
        assert( (pMatch->jointype & JT_RIGHT)==0 );
        if( (pMatch->jointype & JT_LEFT)!=0 ){
          ExprSetProperty(pExpr, EP_CanBeNull);
        }
        pSchema = pExpr->pTab->pSchema;
      }
    } /* if( pSrcList ) */

      pExpr->op = TK_COLUMN;
      pExpr->pTab = pItem->pTab;
      pExpr->iTable = pItem->iCursor;
      pExpr->iColumn = -1;
      pExpr->affinity = SQLITE_AFF_INTEGER;
      break;
    }
#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT) && !defined(SQLITE_OMIT_SUBQUERY) */


    /* A lone identifier is the name of a column.
    */
    case TK_ID: {
      return lookupName(pParse, 0, 0, pExpr->u.zToken, pNC, pExpr);
    }
  







      pExpr->op = TK_COLUMN;
      pExpr->pTab = pItem->pTab;
      pExpr->iTable = pItem->iCursor;
      pExpr->iColumn = -1;
      pExpr->affinity = SQLITE_AFF_INTEGER;
      break;
    }
#endif /* defined(SQLITE_ENABLE_UPDATE_DELETE_LIMIT)
          && !defined(SQLITE_OMIT_SUBQUERY) */

    /* A lone identifier is the name of a column.
    */
    case TK_ID: {
      return lookupName(pParse, 0, 0, pExpr->u.zToken, pNC, pExpr);
    }
  
      }else{
        is_agg = pDef->xFunc==0;
        if( pDef->funcFlags & SQLITE_FUNC_UNLIKELY ){
          ExprSetProperty(pExpr, EP_Unlikely|EP_Skip);
          if( n==2 ){
            pExpr->iTable = exprProbability(pList->a[1].pExpr);
            if( pExpr->iTable<0 ){

              sqlite3ErrorMsg(pParse, "second argument to likelihood() must be a "
                                      "constant between 0.0 and 1.0");
              pNC->nErr++;
            }
          }else{
            /* EVIDENCE-OF: R-61304-29449 The unlikely(X) function is equivalent to
            ** likelihood(X, 0.0625).
            ** EVIDENCE-OF: R-01283-11636 The unlikely(X) function is short-hand for
            ** likelihood(X,0.0625).
            ** EVIDENCE-OF: R-36850-34127 The likely(X) function is short-hand for
            ** likelihood(X,0.9375).
            ** EVIDENCE-OF: R-53436-40973 The likely(X) function is equivalent to
            ** likelihood(X,0.9375). */
            /* TUNING: unlikely() probability is 0.0625.  likely() is 0.9375 */
            pExpr->iTable = pDef->zName[0]=='u' ? 8388608 : 125829120;
          }             
        }
#ifndef SQLITE_OMIT_AUTHORIZATION
        auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0);
        if( auth!=SQLITE_OK ){
          if( auth==SQLITE_DENY ){
            sqlite3ErrorMsg(pParse, "not authorized to use function: %s",
                                    pDef->zName);
            pNC->nErr++;
          }
          pExpr->op = TK_NULL;
          return WRC_Prune;
        }
#endif
        if( pDef->funcFlags & SQLITE_FUNC_CONSTANT ) ExprSetProperty(pExpr,EP_Constant);


      }
      if( is_agg && (pNC->ncFlags & NC_AllowAgg)==0 ){
        sqlite3ErrorMsg(pParse, "misuse of aggregate function %.*s()", nId,zId);
        pNC->nErr++;
        is_agg = 0;
      }else if( no_such_func && pParse->db->init.busy==0 ){
        sqlite3ErrorMsg(pParse, "no such function: %.*s", nId, zId);







      }else{
        is_agg = pDef->xFunc==0;
        if( pDef->funcFlags & SQLITE_FUNC_UNLIKELY ){
          ExprSetProperty(pExpr, EP_Unlikely|EP_Skip);
          if( n==2 ){
            pExpr->iTable = exprProbability(pList->a[1].pExpr);
            if( pExpr->iTable<0 ){
              sqlite3ErrorMsg(pParse,
                "second argument to likelihood() must be a "
                "constant between 0.0 and 1.0");
              pNC->nErr++;
            }
          }else{
            /* EVIDENCE-OF: R-61304-29449 The unlikely(X) function is
            ** equivalent to likelihood(X, 0.0625).
            ** EVIDENCE-OF: R-01283-11636 The unlikely(X) function is
            ** short-hand for likelihood(X,0.0625).
            ** EVIDENCE-OF: R-36850-34127 The likely(X) function is short-hand
            ** for likelihood(X,0.9375).
            ** EVIDENCE-OF: R-53436-40973 The likely(X) function is equivalent
            ** to likelihood(X,0.9375). */
            /* TUNING: unlikely() probability is 0.0625.  likely() is 0.9375 */
            pExpr->iTable = pDef->zName[0]=='u' ? 8388608 : 125829120;
          }             
        }
#ifndef SQLITE_OMIT_AUTHORIZATION
        auth = sqlite3AuthCheck(pParse, SQLITE_FUNCTION, 0, pDef->zName, 0);
        if( auth!=SQLITE_OK ){
          if( auth==SQLITE_DENY ){
            sqlite3ErrorMsg(pParse, "not authorized to use function: %s",
                                    pDef->zName);
            pNC->nErr++;
          }
          pExpr->op = TK_NULL;
          return WRC_Prune;
        }
#endif
        if( pDef->funcFlags & SQLITE_FUNC_CONSTANT ){
          ExprSetProperty(pExpr,EP_ConstFunc);
        }
      }
      if( is_agg && (pNC->ncFlags & NC_AllowAgg)==0 ){
        sqlite3ErrorMsg(pParse, "misuse of aggregate function %.*s()", nId,zId);
        pNC->nErr++;
        is_agg = 0;
      }else if( no_such_func && pParse->db->init.busy==0 ){
        sqlite3ErrorMsg(pParse, "no such function: %.*s", nId, zId);
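
The hint functions resolved above are used directly in SQL text. A hedged example of preparing such a query; table t1 and its columns a and b are hypothetical:

#include <sqlite3.h>

/* unlikely(X) is shorthand for likelihood(X,0.0625) and likely(X) for
** likelihood(X,0.9375); the second likelihood() argument must be a constant
** between 0.0 and 1.0 or resolution fails with the error message above. */
static int prepareHintedQuery(sqlite3 *db, sqlite3_stmt **ppStmt){
  static const char zSql[] =
    "SELECT * FROM t1 WHERE likelihood(a=?1, 0.9375) AND unlikely(b IS NULL)";
  return sqlite3_prepare_v2(db, zSql, -1, ppStmt, 0);
}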
  assert( pEList!=0 );  /* sqlite3SelectNew() guarantees this */
  for(i=0, pItem=pOrderBy->a; i<pOrderBy->nExpr; i++, pItem++){
    if( pItem->u.x.iOrderByCol ){
      if( pItem->u.x.iOrderByCol>pEList->nExpr ){
        resolveOutOfRangeError(pParse, zType, i+1, pEList->nExpr);
        return 1;
      }
      resolveAlias(pParse, pEList, pItem->u.x.iOrderByCol-1, pItem->pExpr, zType,0);

    }
  }
  return 0;
}

/*
** pOrderBy is an ORDER BY or GROUP BY clause in SELECT statement pSelect.







  assert( pEList!=0 );  /* sqlite3SelectNew() guarantees this */
  for(i=0, pItem=pOrderBy->a; i<pOrderBy->nExpr; i++, pItem++){
    if( pItem->u.x.iOrderByCol ){
      if( pItem->u.x.iOrderByCol>pEList->nExpr ){
        resolveOutOfRangeError(pParse, zType, i+1, pEList->nExpr);
        return 1;
      }
      resolveAlias(pParse, pEList, pItem->u.x.iOrderByCol-1, pItem->pExpr,
                   zType,0);
    }
  }
  return 0;
}

/*
** pOrderBy is an ORDER BY or GROUP BY clause in SELECT statement pSelect.
      p = p->pLeft;
      continue;
    }
    if( op==TK_COLLATE || (op==TK_REGISTER && p->op2==TK_COLLATE) ){
      pColl = sqlite3GetCollSeq(pParse, ENC(db), 0, p->u.zToken);
      break;
    }
    if( p->pTab!=0
     && (op==TK_AGG_COLUMN || op==TK_COLUMN
          || op==TK_REGISTER || op==TK_TRIGGER)

    ){
      /* op==TK_REGISTER && p->pTab!=0 happens when pExpr was originally
      ** a TK_COLUMN but was previously evaluated and cached in a register */
      int j = p->iColumn;
      if( j>=0 ){
        const char *zColl = p->pTab->aCol[j].zColl;
        pColl = sqlite3FindCollSeq(db, ENC(db), zColl, 0);
      }
      break;
    }
    if( p->flags & EP_Collate ){
      if( ALWAYS(p->pLeft) && (p->pLeft->flags & EP_Collate)!=0 ){
        p = p->pLeft;
      }else{
        p = p->pRight;















      }
    }else{
      break;
    }
  }
  if( sqlite3CheckCollSeq(pParse, pColl) ){ 
    pColl = 0;







      p = p->pLeft;
      continue;
    }
    if( op==TK_COLLATE || (op==TK_REGISTER && p->op2==TK_COLLATE) ){
      pColl = sqlite3GetCollSeq(pParse, ENC(db), 0, p->u.zToken);
      break;
    }

    if( (op==TK_AGG_COLUMN || op==TK_COLUMN
          || op==TK_REGISTER || op==TK_TRIGGER)
     && p->pTab!=0
    ){
      /* op==TK_REGISTER && p->pTab!=0 happens when pExpr was originally
      ** a TK_COLUMN but was previously evaluated and cached in a register */
      int j = p->iColumn;
      if( j>=0 ){
        const char *zColl = p->pTab->aCol[j].zColl;
        pColl = sqlite3FindCollSeq(db, ENC(db), zColl, 0);
      }
      break;
    }
    if( p->flags & EP_Collate ){
      if( p->pLeft && (p->pLeft->flags & EP_Collate)!=0 ){
        p = p->pLeft;
      }else{
        Expr *pNext  = p->pRight;
        /* The Expr.x union is never used at the same time as Expr.pRight */
        assert( p->x.pList==0 || p->pRight==0 );
        /* p->flags holds EP_Collate and p->pLeft->flags does not.  And
        ** p->x.pSelect cannot.  So if p->x.pLeft exists, it must hold at
        ** least one EP_Collate. Thus the following two ALWAYS. */
        if( p->x.pList!=0 && ALWAYS(!ExprHasProperty(p, EP_xIsSelect)) ){
          int i;
          for(i=0; ALWAYS(i<p->x.pList->nExpr); i++){
            if( ExprHasProperty(p->x.pList->a[i].pExpr, EP_Collate) ){
              pNext = p->x.pList->a[i].pExpr;
              break;
            }
          }
        }
        p = pNext;
      }
    }else{
      break;
    }
  }
  if( sqlite3CheckCollSeq(pParse, pColl) ){ 
    pColl = 0;

/*
** Set the Expr.nHeight variable in the structure passed as an 
** argument. An expression with no children, Expr.pList or 
** Expr.pSelect member has a height of 1. Any other expression
** has a height equal to the maximum height of any other 
** referenced Expr plus one.



*/
static void exprSetHeight(Expr *p){
  int nHeight = 0;
  heightOfExpr(p->pLeft, &nHeight);
  heightOfExpr(p->pRight, &nHeight);
  if( ExprHasProperty(p, EP_xIsSelect) ){
    heightOfSelect(p->x.pSelect, &nHeight);
  }else{
    heightOfExprList(p->x.pList, &nHeight);

  }
  p->nHeight = nHeight + 1;
}

/*
** Set the Expr.nHeight variable using the exprSetHeight() function. If
** the height is greater than the maximum allowed expression depth,
** leave an error in pParse.



*/
SQLITE_PRIVATE void sqlite3ExprSetHeight(Parse *pParse, Expr *p){
  exprSetHeight(p);
  sqlite3ExprCheckHeight(pParse, p->nHeight);
}

/*
** Return the maximum height of any expression tree referenced
** by the select statement passed as an argument.
*/
SQLITE_PRIVATE int sqlite3SelectExprHeight(Select *p){
  int nHeight = 0;
  heightOfSelect(p, &nHeight);
  return nHeight;
}
#else









  #define exprSetHeight(y)
#endif /* SQLITE_MAX_EXPR_DEPTH>0 */

/*
** This routine is the core allocator for Expr nodes.
**
** Construct a new expression node and return a pointer to it.  Memory
** for this node and for the pToken argument is a single allocation








/*
** Set the Expr.nHeight variable in the structure passed as an 
** argument. An expression with no children, Expr.pList or 
** Expr.pSelect member has a height of 1. Any other expression
** has a height equal to the maximum height of any other 
** referenced Expr plus one.
**
** Also propagate EP_Propagate flags up from Expr.x.pList to Expr.flags,
** if appropriate.
*/
static void exprSetHeight(Expr *p){
  int nHeight = 0;
  heightOfExpr(p->pLeft, &nHeight);
  heightOfExpr(p->pRight, &nHeight);
  if( ExprHasProperty(p, EP_xIsSelect) ){
    heightOfSelect(p->x.pSelect, &nHeight);
  }else if( p->x.pList ){
    heightOfExprList(p->x.pList, &nHeight);
    p->flags |= EP_Propagate & sqlite3ExprListFlags(p->x.pList);
  }
  p->nHeight = nHeight + 1;
}

/*
** Set the Expr.nHeight variable using the exprSetHeight() function. If
** the height is greater than the maximum allowed expression depth,
** leave an error in pParse.
**
** Also propagate all EP_Propagate flags from the Expr.x.pList into
** Expr.flags. 
*/
SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p){
  exprSetHeight(p);
  sqlite3ExprCheckHeight(pParse, p->nHeight);
}

/*
** Return the maximum height of any expression tree referenced
** by the select statement passed as an argument.
*/
SQLITE_PRIVATE int sqlite3SelectExprHeight(Select *p){
  int nHeight = 0;
  heightOfSelect(p, &nHeight);
  return nHeight;
}
#else /* ABOVE:  Height enforcement enabled.  BELOW: Height enforcement off */
/*
** Propagate all EP_Propagate flags from the Expr.x.pList into
** Expr.flags. 
*/
SQLITE_PRIVATE void sqlite3ExprSetHeightAndFlags(Parse *pParse, Expr *p){
  if( p && p->x.pList && !ExprHasProperty(p, EP_xIsSelect) ){
    p->flags |= EP_Propagate & sqlite3ExprListFlags(p->x.pList);
  }
}
#define exprSetHeight(y)
#endif /* SQLITE_MAX_EXPR_DEPTH>0 */

/*
** This routine is the core allocator for Expr nodes.
**
** Construct a new expression node and return a pointer to it.  Memory
** for this node and for the pToken argument is a single allocation
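
The height rule described above (a node with no children has height 1, any other node one more than its tallest child) can be sketched generically; the Node type below is hypothetical and not the real Expr structure:

/* Generic recursive height computation matching exprSetHeight()'s rule. */
typedef struct Node Node;
struct Node { Node *pLeft, *pRight; };

static int nodeHeight(const Node *p){
  int hL, hR;
  if( p==0 ) return 0;               /* empty subtree contributes nothing */
  hL = nodeHeight(p->pLeft);
  hR = nodeHeight(p->pRight);
  return 1 + (hL>hR ? hL : hR);      /* this node plus its tallest child */
}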
  if( pRoot==0 ){
    assert( db->mallocFailed );
    sqlite3ExprDelete(db, pLeft);
    sqlite3ExprDelete(db, pRight);
  }else{
    if( pRight ){
      pRoot->pRight = pRight;
      pRoot->flags |= EP_Collate & pRight->flags;
    }
    if( pLeft ){
      pRoot->pLeft = pLeft;
      pRoot->flags |= EP_Collate & pLeft->flags;
    }
    exprSetHeight(pRoot);
  }
}

/*
** Allocate an Expr node which joins as many as two subtrees.







  if( pRoot==0 ){
    assert( db->mallocFailed );
    sqlite3ExprDelete(db, pLeft);
    sqlite3ExprDelete(db, pRight);
  }else{
    if( pRight ){
      pRoot->pRight = pRight;
      pRoot->flags |= EP_Propagate & pRight->flags;
    }
    if( pLeft ){
      pRoot->pLeft = pLeft;
      pRoot->flags |= EP_Propagate & pLeft->flags;
    }
    exprSetHeight(pRoot);
  }
}

/*
** Allocate an Expr node which joins as many as two subtrees.
  pNew = sqlite3ExprAlloc(db, TK_FUNCTION, pToken, 1);
  if( pNew==0 ){
    sqlite3ExprListDelete(db, pList); /* Avoid memory leak when malloc fails */
    return 0;
  }
  pNew->x.pList = pList;
  assert( !ExprHasProperty(pNew, EP_xIsSelect) );
  sqlite3ExprSetHeight(pParse, pNew);
  return pNew;
}

/*
** Assign a variable number to an expression that encodes a wildcard
** in the original SQL statement.  
**







  pNew = sqlite3ExprAlloc(db, TK_FUNCTION, pToken, 1);
  if( pNew==0 ){
    sqlite3ExprListDelete(db, pList); /* Avoid memory leak when malloc fails */
    return 0;
  }
  pNew->x.pList = pList;
  assert( !ExprHasProperty(pNew, EP_xIsSelect) );
  sqlite3ExprSetHeightAndFlags(pParse, pNew);
  return pNew;
}

/*
** Assign a variable number to an expression that encodes a wildcard
** in the original SQL statement.  
**
    sqlite3ExprDelete(db, pItem->pExpr);
    sqlite3DbFree(db, pItem->zName);
    sqlite3DbFree(db, pItem->zSpan);
  }
  sqlite3DbFree(db, pList->a);
  sqlite3DbFree(db, pList);
}
















/*
** These routines are Walker callbacks used to check expressions to
** see if they are "constant" for some definition of constant.  The
** Walker.eCode value determines the type of "constant" we are looking
** for.
**







    sqlite3ExprDelete(db, pItem->pExpr);
    sqlite3DbFree(db, pItem->zName);
    sqlite3DbFree(db, pItem->zSpan);
  }
  sqlite3DbFree(db, pList->a);
  sqlite3DbFree(db, pList);
}

/*
** Return the bitwise-OR of all Expr.flags fields in the given
** ExprList.
*/
SQLITE_PRIVATE u32 sqlite3ExprListFlags(const ExprList *pList){
  int i;
  u32 m = 0;
  if( pList ){
    for(i=0; i<pList->nExpr; i++){
       m |= pList->a[i].pExpr->flags;
    }
  }
  return m;
}

/*
** These routines are Walker callbacks used to check expressions to
** see if they are "constant" for some definition of constant.  The
** Walker.eCode value determines the type of "constant" we are looking
** for.
**
  }

  switch( pExpr->op ){
    /* Consider functions to be constant if all their arguments are constant
    ** and either pWalker->eCode==4 or 5 or the function has the
    ** SQLITE_FUNC_CONST flag. */
    case TK_FUNCTION:
      if( pWalker->eCode>=4 || ExprHasProperty(pExpr,EP_Constant) ){
        return WRC_Continue;
      }else{
        pWalker->eCode = 0;
        return WRC_Abort;
      }
    case TK_ID:
    case TK_COLUMN:







  }

  switch( pExpr->op ){
    /* Consider functions to be constant if all their arguments are constant
    ** and either pWalker->eCode==4 or 5 or the function has the
    ** SQLITE_FUNC_CONST flag. */
    case TK_FUNCTION:
      if( pWalker->eCode>=4 || ExprHasProperty(pExpr,EP_ConstFunc) ){
        return WRC_Continue;
      }else{
        pWalker->eCode = 0;
        return WRC_Abort;
      }
    case TK_ID:
    case TK_COLUMN:
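
EP_ConstFunc is only set for functions carrying SQLITE_FUNC_CONSTANT, which an application-defined function acquires by being registered with the SQLITE_DETERMINISTIC flag. A minimal sketch; halfFunc and registerHalf are hypothetical:

#include <sqlite3.h>

/* A deterministic one-argument function: same input always gives the same
** output, so the constant-expression walker above may treat half(?) of a
** constant argument as constant. */
static void halfFunc(sqlite3_context *ctx, int argc, sqlite3_value **argv){
  (void)argc;
  sqlite3_result_double(ctx, 0.5*sqlite3_value_double(argv[0]));
}

static int registerHalf(sqlite3 *db){
  return sqlite3_create_function(db, "half", 1,
                                 SQLITE_UTF8|SQLITE_DETERMINISTIC, 0,
                                 halfFunc, 0, 0);
}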
*/
SQLITE_PRIVATE void sqlite3ExprCacheStore(Parse *pParse, int iTab, int iCol, int iReg){
  int i;
  int minLru;
  int idxLru;
  struct yColCache *p;

  assert( iReg>0 );  /* Register numbers are always positive */

  assert( iCol>=-1 && iCol<32768 );  /* Finite column numbers */

  /* The SQLITE_ColumnCache flag disables the column cache.  This is used
  ** for testing only - to verify that SQLite always gets the same answer
  ** with and without the column cache.
  */
  if( OptimizationDisabled(pParse->db, SQLITE_ColumnCache) ) return;







*/
SQLITE_PRIVATE void sqlite3ExprCacheStore(Parse *pParse, int iTab, int iCol, int iReg){
  int i;
  int minLru;
  int idxLru;
  struct yColCache *p;

  /* Unless an error has occurred, register numbers are always positive. */
  assert( iReg>0 || pParse->nErr || pParse->db->mallocFailed );
  assert( iCol>=-1 && iCol<32768 );  /* Finite column numbers */

  /* The SQLITE_ColumnCache flag disables the column cache.  This is used
  ** for testing only - to verify that SQLite always gets the same answer
  ** with and without the column cache.
  */
  if( OptimizationDisabled(pParse->db, SQLITE_ColumnCache) ) return;
        zKey = (char *)sqlite3_value_blob(argv[2]);
        rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
        break;

      case SQLITE_NULL:
        /* No key specified.  Use the key from the main database */
        sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey);
        if( nKey>0 || sqlite3BtreeGetReserve(db->aDb[0].pBt)>0 ){
          rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
        }
        break;
    }
  }
#endif








        zKey = (char *)sqlite3_value_blob(argv[2]);
        rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
        break;

      case SQLITE_NULL:
        /* No key specified.  Use the key from the main database */
        sqlite3CodecGetKey(db, 0, (void**)&zKey, &nKey);
        if( nKey>0 || sqlite3BtreeGetOptimalReserve(db->aDb[0].pBt)>0 ){
          rc = sqlite3CodecAttach(db, db->nDb-1, zKey, nKey);
        }
        break;
    }
  }
#endif

**
** See also sqlite3LocateTable().
*/
SQLITE_PRIVATE Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){
  Table *p = 0;
  int i;

#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) || zName==0 ) return 0;
#endif

  /* All mutexes are required for schema access.  Make sure we hold them. */
  assert( zDatabase!=0 || sqlite3BtreeHoldsAllMutexes(db) );
#if SQLITE_USER_AUTHENTICATION
  /* Only the admin user is allowed to know that the sqlite_user table
  ** exists */
  if( db->auth.authLevel<UAUTH_Admin && sqlite3UserAuthTable(zName)!=0 ){
    return 0;







**
** See also sqlite3LocateTable().
*/
SQLITE_PRIVATE Table *sqlite3FindTable(sqlite3 *db, const char *zName, const char *zDatabase){
  Table *p = 0;
  int i;





  /* All mutexes are required for schema access.  Make sure we hold them. */
  assert( zDatabase!=0 || sqlite3BtreeHoldsAllMutexes(db) );
#if SQLITE_USER_AUTHENTICATION
  /* Only the admin user is allowed to know that the sqlite_user table
  ** exists */
  if( db->auth.authLevel<UAUTH_Admin && sqlite3UserAuthTable(zName)!=0 ){
    return 0;
    }
    pPk->nKeyCol = j;
  }
  pPk->isCovering = 1;
  assert( pPk!=0 );
  nPk = pPk->nKeyCol;

  /* Make sure every column of the PRIMARY KEY is NOT NULL */


  for(i=0; i<nPk; i++){
    pTab->aCol[pPk->aiColumn[i]].notNull = 1;
  }
  pPk->uniqNotNull = 1;


  /* The root page of the PRIMARY KEY is the table root page */
  pPk->tnum = pTab->tnum;

  /* Update the in-memory representation of all UNIQUE indices by converting
  ** the final rowid column into one or more columns of the PRIMARY KEY.
  */







    }
    pPk->nKeyCol = j;
  }
  pPk->isCovering = 1;
  assert( pPk!=0 );
  nPk = pPk->nKeyCol;

  /* Make sure every column of the PRIMARY KEY is NOT NULL.  (Except,
  ** do not enforce this for imposter tables.) */
  if( !db->init.imposterTable ){
    for(i=0; i<nPk; i++){
      pTab->aCol[pPk->aiColumn[i]].notNull = 1;
    }
    pPk->uniqNotNull = 1;
  }

  /* The root page of the PRIMARY KEY is the table root page */
  pPk->tnum = pTab->tnum;

  /* Update the in-memory representation of all UNIQUE indices by converting
  ** the final rowid column into one or more columns of the PRIMARY KEY.
  */
  pWhereRowid = sqlite3PExpr(pParse, TK_ROW, 0, 0, 0);
  if( pWhereRowid == 0 ) goto limit_where_cleanup_1;
  pInClause = sqlite3PExpr(pParse, TK_IN, pWhereRowid, 0, 0);
  if( pInClause == 0 ) goto limit_where_cleanup_1;

  pInClause->x.pSelect = pSelect;
  pInClause->flags |= EP_xIsSelect;
  sqlite3ExprSetHeight(pParse, pInClause);
  return pInClause;

  /* something went wrong. clean up anything allocated. */
limit_where_cleanup_1:
  sqlite3SelectDelete(pParse->db, pSelect);
  return 0;








  pWhereRowid = sqlite3PExpr(pParse, TK_ROW, 0, 0, 0);
  if( pWhereRowid == 0 ) goto limit_where_cleanup_1;
  pInClause = sqlite3PExpr(pParse, TK_IN, pWhereRowid, 0, 0);
  if( pInClause == 0 ) goto limit_where_cleanup_1;

  pInClause->x.pSelect = pSelect;
  pInClause->flags |= EP_xIsSelect;
  sqlite3ExprSetHeightAndFlags(pParse, pInClause);
  return pInClause;

  /* something went wrong. clean up anything allocated. */
limit_where_cleanup_1:
  sqlite3SelectDelete(pParse->db, pSelect);
  return 0;

    len = 0;
    if( p1<0 ){
      for(z2=z; *z2; len++){
        SQLITE_SKIP_UTF8(z2);
      }
    }
  }








  if( argc==3 ){
    p2 = sqlite3_value_int(argv[2]);
    if( p2<0 ){
      p2 = -p2;
      negP2 = 1;
    }
  }else{







    len = 0;
    if( p1<0 ){
      for(z2=z; *z2; len++){
        SQLITE_SKIP_UTF8(z2);
      }
    }
  }
#ifdef SQLITE_SUBSTR_COMPATIBILITY
  /* If SUBSTR_COMPATIBILITY is defined then substr(X,0,N) works the same
  ** as substr(X,1,N) - it returns the first N characters of X.  This
  ** is essentially a back-out of the bug-fix in check-in [5fc125d362df4b8]
  ** from 2009-02-02 for compatibility of applications that exploited the
  ** old buggy behavior. */
  if( p1==0 ) p1 = 1; /* <rdar://problem/6778339> */
#endif
  if( argc==3 ){
    p2 = sqlite3_value_int(argv[2]);
    if( p2<0 ){
      p2 = -p2;
      negP2 = 1;
    }
  }else{
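
A hedged illustration of the compile-time option handled above; under the default post-2009 behavior the first expression below yields 'sq', while a build with -DSQLITE_SUBSTR_COMPATIBILITY makes it behave like substr('sqlite',1,3) and yield 'sql':

/* Probe query comparing the two starting positions. */
static const char zSubstrProbe[] =
  "SELECT substr('sqlite',0,3) AS legacy_start, substr('sqlite',1,3) AS normal_start";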
#    define SQLITE_ENABLE_LOCKING_STYLE 1
#  else
#    define SQLITE_ENABLE_LOCKING_STYLE 0
#  endif
#endif

/***************************************************************************

** The next block of code, including the PragTyp_XXXX macro definitions and
** the aPragmaName[] object is composed of generated code. DO NOT EDIT.


**




** To add new pragmas, edit the code in ../tool/mkpragmatab.tcl and rerun
** that script.  Then copy/paste the output in place of the following:
*/
#define PragTyp_HEADER_VALUE                   0
#define PragTyp_AUTO_VACUUM                    1
#define PragTyp_FLAG                           2
#define PragTyp_BUSY_TIMEOUT                   3
#define PragTyp_CACHE_SIZE                     4
#define PragTyp_CASE_SENSITIVE_LIKE            5







#    define SQLITE_ENABLE_LOCKING_STYLE 1
#  else
#    define SQLITE_ENABLE_LOCKING_STYLE 0
#  endif
#endif

/***************************************************************************
** The "pragma.h" include file is an automatically generated file
** that includes the PragTyp_XXXX macro definitions and the aPragmaNames[]
** object.  This ensures that the aPragmaNames[] table is arranged in
** lexicographical order to facilitate a binary search of the pragma name.
** Do not edit pragma.h directly.  Edit and rerun the script at
** ../tool/mkpragmatab.tcl. */
/************** Include pragma.h in the middle of pragma.c *******************/
/************** Begin file pragma.h ******************************************/
/* DO NOT EDIT!
** This file is automatically generated by the script at
** ../tool/mkpragmatab.tcl.  To update the set of pragmas, edit
** that script and rerun it.
*/
#define PragTyp_HEADER_VALUE                   0
#define PragTyp_AUTO_VACUUM                    1
#define PragTyp_FLAG                           2
#define PragTyp_BUSY_TIMEOUT                   3
#define PragTyp_CACHE_SIZE                     4
#define PragTyp_CASE_SENSITIVE_LIKE            5
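
Because the generated aPragmaNames[] table is kept in lexicographical order, pragma lookup can use a binary search, as the comment above notes. A generic sketch under that assumption; NamedItem and findByName are hypothetical and use plain strcmp rather than SQLite's case-insensitive comparison:

#include <string.h>

typedef struct NamedItem { const char *zName; int eType; } NamedItem;

/* Locate zName in a name-sorted table of n entries, or return 0. */
static const NamedItem *findByName(const NamedItem *a, int n, const char *zName){
  int lwr = 0, upr = n-1;
  while( lwr<=upr ){
    int mid = (lwr+upr)/2;
    int rc = strcmp(zName, a[mid].zName);
    if( rc==0 ) return &a[mid];
    if( rc<0 ) upr = mid-1; else lwr = mid+1;
  }
  return 0;
}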
    /* ePragTyp:  */ PragTyp_INDEX_INFO,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },
  { /* zName:     */ "index_list",
    /* ePragTyp:  */ PragTyp_INDEX_LIST,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },




#endif
#if !defined(SQLITE_OMIT_INTEGRITY_CHECK)
  { /* zName:     */ "integrity_check",
    /* ePragTyp:  */ PragTyp_INTEGRITY_CHECK,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },
#endif







    /* ePragTyp:  */ PragTyp_INDEX_INFO,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },
  { /* zName:     */ "index_list",
    /* ePragTyp:  */ PragTyp_INDEX_LIST,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },
  { /* zName:     */ "index_xinfo",
    /* ePragTyp:  */ PragTyp_INDEX_INFO,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 1 },
#endif
#if !defined(SQLITE_OMIT_INTEGRITY_CHECK)
  { /* zName:     */ "integrity_check",
    /* ePragTyp:  */ PragTyp_INTEGRITY_CHECK,
    /* ePragFlag: */ PragFlag_NeedSchema,
    /* iArg:      */ 0 },
#endif
#if !defined(SQLITE_OMIT_FLAG_PRAGMAS)
  { /* zName:     */ "writable_schema",
    /* ePragTyp:  */ PragTyp_FLAG,
    /* ePragFlag: */ 0,
    /* iArg:      */ SQLITE_WriteSchema|SQLITE_RecoveryMode },
#endif
};
/* Number of pragmas: 58 on by default, 71 total. */
/* End of the automatically generated pragma table.
***************************************************************************/


/*
** Interpret the given string as a safety level.  Return 0 for OFF,
** 1 for ON or NORMAL and 2 for FULL.  Return 1 for an empty or 
** unrecognized string argument.  The FULL option is disallowed
** if the omitFull parameter is 1.
**







#if !defined(SQLITE_OMIT_FLAG_PRAGMAS)
  { /* zName:     */ "writable_schema",
    /* ePragTyp:  */ PragTyp_FLAG,
    /* ePragFlag: */ 0,
    /* iArg:      */ SQLITE_WriteSchema|SQLITE_RecoveryMode },
#endif
};
/* Number of pragmas: 59 on by default, 72 total. */

/************** End of pragma.h **********************************************/
/************** Continuing where we left off in pragma.c *********************/

/*
** Interpret the given string as a safety level.  Return 0 for OFF,
** 1 for ON or NORMAL and 2 for FULL.  Return 1 for an empty or 
** unrecognized string argument.  The FULL option is disallowed
** if the omitFull parameter is 1.
**
  char *aFcntl[4];       /* Argument to SQLITE_FCNTL_PRAGMA */
  int iDb;               /* Database index for <database> */
  int lwr, upr, mid = 0;       /* Binary search bounds */
  int rc;                      /* return value from SQLITE_FCNTL_PRAGMA */
  sqlite3 *db = pParse->db;    /* The database connection */
  Db *pDb;                     /* The specific database being pragmaed */
  Vdbe *v = sqlite3GetVdbe(pParse);  /* Prepared statement */


  if( v==0 ) return;
  sqlite3VdbeRunOnlyOnce(v);
  pParse->nMem = 2;

  /* Interpret the [database.] part of the pragma statement. iDb is the
  ** index of the database this pragma is being applied to in db.aDb[]. */







  char *aFcntl[4];       /* Argument to SQLITE_FCNTL_PRAGMA */
  int iDb;               /* Database index for <database> */
  int lwr, upr, mid = 0;       /* Binary search bounds */
  int rc;                      /* return value from SQLITE_FCNTL_PRAGMA */
  sqlite3 *db = pParse->db;    /* The database connection */
  Db *pDb;                     /* The specific database being pragmaed */
  Vdbe *v = sqlite3GetVdbe(pParse);  /* Prepared statement */
  const struct sPragmaNames *pPragma;

  if( v==0 ) return;
  sqlite3VdbeRunOnlyOnce(v);
  pParse->nMem = 2;

  /* Interpret the [database.] part of the pragma statement. iDb is the
  ** index of the database this pragma is being applied to in db.aDb[]. */
    if( rc<0 ){
      upr = mid - 1;
    }else{
      lwr = mid + 1;
    }
  }
  if( lwr>upr ) goto pragma_out;


  /* Make sure the database schema is loaded if the pragma requires that */
  if( (aPragmaNames[mid].mPragFlag & PragFlag_NeedSchema)!=0 ){
    if( sqlite3ReadSchema(pParse) ) goto pragma_out;
  }

  /* Jump to the appropriate pragma handler */
  switch( aPragmaNames[mid].ePragTyp ){
  
#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && !defined(SQLITE_OMIT_DEPRECATED)
  /*
  **  PRAGMA [database.]default_cache_size
  **  PRAGMA [database.]default_cache_size=N
  **
  ** The first form reports the current persistent setting for the







    if( rc<0 ){
      upr = mid - 1;
    }else{
      lwr = mid + 1;
    }
  }
  if( lwr>upr ) goto pragma_out;
  pPragma = &aPragmaNames[mid];

  /* Make sure the database schema is loaded if the pragma requires that */
  if( (pPragma->mPragFlag & PragFlag_NeedSchema)!=0 ){
    if( sqlite3ReadSchema(pParse) ) goto pragma_out;
  }

  /* Jump to the appropriate pragma handler */
  switch( pPragma->ePragTyp ){
  
#if !defined(SQLITE_OMIT_PAGER_PRAGMAS) && !defined(SQLITE_OMIT_DEPRECATED)
  /*
  **  PRAGMA [database.]default_cache_size
  **  PRAGMA [database.]default_cache_size=N
  **
  ** The first form reports the current persistent setting for the
    break;
  }
#endif /* SQLITE_OMIT_PAGER_PRAGMAS */

#ifndef SQLITE_OMIT_FLAG_PRAGMAS
  case PragTyp_FLAG: {
    if( zRight==0 ){
      returnSingleInt(pParse, aPragmaNames[mid].zName,
                     (db->flags & aPragmaNames[mid].iArg)!=0 );
    }else{
      int mask = aPragmaNames[mid].iArg;    /* Mask of bits to set or clear. */
      if( db->autoCommit==0 ){
        /* Foreign key support may not be enabled or disabled while not
        ** in auto-commit mode.  */
        mask &= ~(SQLITE_ForeignKeys);
      }
#if SQLITE_USER_AUTHENTICATION
      if( db->auth.authLevel==UAUTH_User ){







    break;
  }
#endif /* SQLITE_OMIT_PAGER_PRAGMAS */

#ifndef SQLITE_OMIT_FLAG_PRAGMAS
  case PragTyp_FLAG: {
    if( zRight==0 ){
      returnSingleInt(pParse, pPragma->zName, (db->flags & pPragma->iArg)!=0 );

    }else{
      int mask = pPragma->iArg;    /* Mask of bits to set or clear. */
      if( db->autoCommit==0 ){
        /* Foreign key support may not be enabled or disabled while not
        ** in auto-commit mode.  */
        mask &= ~(SQLITE_ForeignKeys);
      }
#if SQLITE_USER_AUTHENTICATION
      if( db->auth.authLevel==UAUTH_User ){

  case PragTyp_INDEX_INFO: if( zRight ){
    Index *pIdx;
    Table *pTab;
    pIdx = sqlite3FindIndex(db, zRight, zDb);
    if( pIdx ){
      int i;

      pTab = pIdx->pTable;
      sqlite3VdbeSetNumCols(v, 3);
      pParse->nMem = 3;
      sqlite3CodeVerifySchema(pParse, iDb);
      sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seqno", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "cid", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "name", SQLITE_STATIC);



      for(i=0; i<pIdx->nKeyCol; i++){
        i16 cnum = pIdx->aiColumn[i];
        sqlite3VdbeAddOp2(v, OP_Integer, i, 1);
        sqlite3VdbeAddOp2(v, OP_Integer, cnum, 2);
        assert( pTab->nCol>cnum );


        sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pTab->aCol[cnum].zName, 0);




        sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3);
      }
    }
  }
  break;

  case PragTyp_INDEX_LIST: if( zRight ){
    Index *pIdx;
    Table *pTab;
    int i;
    pTab = sqlite3FindTable(db, zRight, zDb);
    if( pTab ){
      v = sqlite3GetVdbe(pParse);
      sqlite3VdbeSetNumCols(v, 3);
      pParse->nMem = 3;
      sqlite3CodeVerifySchema(pParse, iDb);
      sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "unique", SQLITE_STATIC);


      for(pIdx=pTab->pIndex, i=0; pIdx; pIdx=pIdx->pNext, i++){

        sqlite3VdbeAddOp2(v, OP_Integer, i, 1);
        sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pIdx->zName, 0);
        sqlite3VdbeAddOp2(v, OP_Integer, IsUniqueIndex(pIdx), 3);


        sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 3);
      }
    }
  }
  break;

  case PragTyp_DATABASE_LIST: {
    int i;








  case PragTyp_INDEX_INFO: if( zRight ){
    Index *pIdx;
    Table *pTab;
    pIdx = sqlite3FindIndex(db, zRight, zDb);
    if( pIdx ){
      int i;
      int mx = pPragma->iArg ? pIdx->nColumn : pIdx->nKeyCol;
      pTab = pIdx->pTable;
      sqlite3VdbeSetNumCols(v, 6);
      pParse->nMem = 6;
      sqlite3CodeVerifySchema(pParse, iDb);
      sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seqno", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "cid", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "name", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "desc", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "coll", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 5, COLNAME_NAME, "key", SQLITE_STATIC);
      for(i=0; i<mx; i++){
        i16 cnum = pIdx->aiColumn[i];
        sqlite3VdbeAddOp2(v, OP_Integer, i, 1);
        sqlite3VdbeAddOp2(v, OP_Integer, cnum, 2);
        if( cnum<0 ){
          sqlite3VdbeAddOp2(v, OP_Null, 0, 3);
        }else{
          sqlite3VdbeAddOp4(v, OP_String8, 0, 3, 0, pTab->aCol[cnum].zName, 0);
        }
        sqlite3VdbeAddOp2(v, OP_Integer, pIdx->aSortOrder[i], 4);
        sqlite3VdbeAddOp4(v, OP_String8, 0, 5, 0, pIdx->azColl[i], 0);
        sqlite3VdbeAddOp2(v, OP_Integer, i<pIdx->nKeyCol, 6);
        sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 6);
      }
    }
  }
  break;

  case PragTyp_INDEX_LIST: if( zRight ){
    Index *pIdx;
    Table *pTab;
    int i;
    pTab = sqlite3FindTable(db, zRight, zDb);
    if( pTab ){
      v = sqlite3GetVdbe(pParse);
      sqlite3VdbeSetNumCols(v, 5);
      pParse->nMem = 5;
      sqlite3CodeVerifySchema(pParse, iDb);
      sqlite3VdbeSetColName(v, 0, COLNAME_NAME, "seq", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 1, COLNAME_NAME, "name", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 2, COLNAME_NAME, "unique", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 3, COLNAME_NAME, "origin", SQLITE_STATIC);
      sqlite3VdbeSetColName(v, 4, COLNAME_NAME, "partial", SQLITE_STATIC);
      for(pIdx=pTab->pIndex, i=0; pIdx; pIdx=pIdx->pNext, i++){
        const char *azOrigin[] = { "c", "u", "pk" };
        sqlite3VdbeAddOp2(v, OP_Integer, i, 1);
        sqlite3VdbeAddOp4(v, OP_String8, 0, 2, 0, pIdx->zName, 0);
        sqlite3VdbeAddOp2(v, OP_Integer, IsUniqueIndex(pIdx), 3);
        sqlite3VdbeAddOp4(v, OP_String8, 0, 4, 0, azOrigin[pIdx->idxType], 0);
        sqlite3VdbeAddOp2(v, OP_Integer, pIdx->pPartIdxWhere!=0, 5);
        sqlite3VdbeAddOp2(v, OP_ResultRow, 1, 5);
      }
    }
  }
  break;

  case PragTyp_DATABASE_LIST: {
    int i;
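
A hedged example of the expanded pragma output generated above; table t1 and index i1 are hypothetical:

/* index_list now reports "origin" ('c', 'u' or 'pk') and "partial" columns,
** and the new index_xinfo pragma lists every column of the index together
** with a "key" flag. */
static const char zIndexProbe[] =
  "CREATE TABLE t1(a,b,c); CREATE INDEX i1 ON t1(b,c);"
  "PRAGMA index_list('t1');"    /* seq | name | unique | origin | partial */
  "PRAGMA index_xinfo('i1');";  /* seqno | cid | name | desc | coll | key */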
  ** the schema-version is potentially dangerous and may lead to program
  ** crashes or database corruption. Use with caution!
  **
  ** The user-version is not used internally by SQLite. It may be used by
  ** applications for any purpose.
  */
  case PragTyp_HEADER_VALUE: {
    int iCookie = aPragmaNames[mid].iArg;  /* Which cookie to read or write */
    sqlite3VdbeUsesBtree(v, iDb);
    if( zRight && (aPragmaNames[mid].mPragFlag & PragFlag_ReadOnly)==0 ){
      /* Write the specified cookie value */
      static const VdbeOpList setCookie[] = {
        { OP_Transaction,    0,  1,  0},    /* 0 */
        { OP_Integer,        0,  1,  0},    /* 1 */
        { OP_SetCookie,      0,  0,  1},    /* 2 */
      };
      int addr = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie, 0);







  ** the schema-version is potentially dangerous and may lead to program
  ** crashes or database corruption. Use with caution!
  **
  ** The user-version is not used internally by SQLite. It may be used by
  ** applications for any purpose.
  */
  case PragTyp_HEADER_VALUE: {
    int iCookie = pPragma->iArg;  /* Which cookie to read or write */
    sqlite3VdbeUsesBtree(v, iDb);
    if( zRight && (pPragma->mPragFlag & PragFlag_ReadOnly)==0 ){
      /* Write the specified cookie value */
      static const VdbeOpList setCookie[] = {
        { OP_Transaction,    0,  1,  0},    /* 0 */
        { OP_Integer,        0,  1,  0},    /* 1 */
        { OP_SetCookie,      0,  0,  1},    /* 2 */
      };
      int addr = sqlite3VdbeAddOpList(v, ArraySize(setCookie), setCookie, 0);
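
A minimal sketch, not part of the diff, of how an application might use the
user-version cookie that this pragma exposes (assumes an open handle "db"):

  static int bump_user_version(sqlite3 *db){
    sqlite3_stmt *pStmt;
    char *zSql;
    int iVer = 0;
    int rc = sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &pStmt, 0);
    if( rc!=SQLITE_OK ) return rc;
    if( sqlite3_step(pStmt)==SQLITE_ROW ) iVer = sqlite3_column_int(pStmt, 0);
    sqlite3_finalize(pStmt);
    /* PRAGMA values cannot be bound, so format the new value into the SQL */
    zSql = sqlite3_mprintf("PRAGMA user_version=%d;", iVer+1);
    if( zSql==0 ) return SQLITE_NOMEM;
    rc = sqlite3_exec(db, zSql, 0, 0, 0);
    sqlite3_free(zSql);
    return rc;
  }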
(old version, lines 104263-104277)
  **
  ** Call sqlite3_busy_timeout(db, N).  Return the current timeout value
  ** if one is set.  If no busy handler or a different busy handler is set
  ** then 0 is returned.  Setting the busy_timeout to 0 or negative
  ** disables the timeout.
  */
  /*case PragTyp_BUSY_TIMEOUT*/ default: {
    assert( aPragmaNames[mid].ePragTyp==PragTyp_BUSY_TIMEOUT );
    if( zRight ){
      sqlite3_busy_timeout(db, sqlite3Atoi(zRight));
    }
    returnSingleInt(pParse, "timeout",  db->busyTimeout);
    break;
  }








(new version, lines 104473-104487)
  **
  ** Call sqlite3_busy_timeout(db, N).  Return the current timeout value
  ** if one is set.  If no busy handler or a different busy handler is set
  ** then 0 is returned.  Setting the busy_timeout to 0 or negative
  ** disables the timeout.
  */
  /*case PragTyp_BUSY_TIMEOUT*/ default: {
    assert( pPragma->ePragTyp==PragTyp_BUSY_TIMEOUT );
    if( zRight ){
      sqlite3_busy_timeout(db, sqlite3Atoi(zRight));
    }
    returnSingleInt(pParse, "timeout",  db->busyTimeout);
    break;
  }
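
The busy-timeout pragma handled here is a thin wrapper around
sqlite3_busy_timeout(). A minimal sketch, not part of the diff, of the two
equivalent ways to set a two-second timeout (assumes an open handle "db"):

  sqlite3_busy_timeout(db, 2000);                          /* C API      */
  sqlite3_exec(db, "PRAGMA busy_timeout=2000;", 0, 0, 0);  /* SQL/PRAGMA */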

(old version, lines 108484-108498)
** exist on the table t1, a complete scan of the data might be
** avoided.
**
** Flattening is only attempted if all of the following are true:
**
**   (1)  The subquery and the outer query do not both use aggregates.
**
**   (2)  The subquery is not an aggregate or the outer query is not a join.



**
**   (3)  The subquery is not the right operand of a left outer join
**        (Originally ticket #306.  Strengthened by ticket #3300)
**
**   (4)  The subquery is not DISTINCT.
**
**  (**)  At one point restrictions (4) and (5) defined a subset of DISTINCT







(new version, lines 108694-108711)
** exist on the table t1, a complete scan of the data might be
** avoided.
**
** Flattening is only attempted if all of the following are true:
**
**   (1)  The subquery and the outer query do not both use aggregates.
**
**   (2)  The subquery is not an aggregate or (2a) the outer query is not a join
**        and (2b) the outer query does not use subqueries other than the one
**        FROM-clause subquery that is a candidate for flattening.  (2b is
**        due to ticket [2f7170d73bf9abf80] from 2015-02-09.)
**
**   (3)  The subquery is not the right operand of a left outer join
**        (Originally ticket #306.  Strengthened by ticket #3300)
**
**   (4)  The subquery is not DISTINCT.
**
**  (**)  At one point restrictions (4) and (5) defined a subset of DISTINCT
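
A sketch, with hypothetical table names, of a query that the new restriction
(2b) excludes from flattening: the FROM-clause subquery is an aggregate and
the outer query also uses a scalar subquery in its result list, so the
flattener now leaves it alone:

  static const char *zNoFlatten =
      "SELECT total, (SELECT count(*) FROM t2)"
      "  FROM (SELECT sum(a) AS total FROM t1);";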
(old version, lines 108621-108636)
  if( OptimizationDisabled(db, SQLITE_QueryFlattener) ) return 0;
  pSrc = p->pSrc;
  assert( pSrc && iFrom>=0 && iFrom<pSrc->nSrc );
  pSubitem = &pSrc->a[iFrom];
  iParent = pSubitem->iCursor;
  pSub = pSubitem->pSelect;
  assert( pSub!=0 );

  if( isAgg && subqueryIsAgg ) return 0;                 /* Restriction (1)  */
  if( subqueryIsAgg && pSrc->nSrc>1 ) return 0;          /* Restriction (2)  */








  pSubSrc = pSub->pSrc;
  assert( pSubSrc );
  /* Prior to version 3.1.2, when LIMIT and OFFSET had to be simple constants,
  ** not arbitrary expressions, we allowed some combining of LIMIT and OFFSET
  ** because they could be computed at compile-time.  But when LIMIT and OFFSET
  ** became arbitrary expressions, we were forced to add restrictions (13)
  ** and (14). */







(new version, lines 108834-108858)
  if( OptimizationDisabled(db, SQLITE_QueryFlattener) ) return 0;
  pSrc = p->pSrc;
  assert( pSrc && iFrom>=0 && iFrom<pSrc->nSrc );
  pSubitem = &pSrc->a[iFrom];
  iParent = pSubitem->iCursor;
  pSub = pSubitem->pSelect;
  assert( pSub!=0 );
  if( subqueryIsAgg ){
    if( isAgg ) return 0;                                /* Restriction (1)   */
    if( pSrc->nSrc>1 ) return 0;                         /* Restriction (2a)  */
    if( (p->pWhere && ExprHasProperty(p->pWhere,EP_Subquery))
     || (sqlite3ExprListFlags(p->pEList) & EP_Subquery)!=0
     || (sqlite3ExprListFlags(p->pOrderBy) & EP_Subquery)!=0
    ){
      return 0;                                          /* Restriction (2b)  */
    }
  }
    
  pSubSrc = pSub->pSrc;
  assert( pSubSrc );
  /* Prior to version 3.1.2, when LIMIT and OFFSET had to be simple constants,
  ** not arbitrary expressions, we allowed some combining of LIMIT and OFFSET
  ** because they could be computed at compile-time.  But when LIMIT and OFFSET
  ** became arbitrary expressions, we were forced to add restrictions (13)
  ** and (14). */
(old version, lines 109443-109457)
#endif
    if( pFrom->zName==0 ){
#ifndef SQLITE_OMIT_SUBQUERY
      Select *pSel = pFrom->pSelect;
      /* A sub-query in the FROM clause of a SELECT */
      assert( pSel!=0 );
      assert( pFrom->pTab==0 );
      sqlite3WalkSelect(pWalker, pSel);
      pFrom->pTab = pTab = sqlite3DbMallocZero(db, sizeof(Table));
      if( pTab==0 ) return WRC_Abort;
      pTab->nRef = 1;
      pTab->zName = sqlite3MPrintf(db, "sqlite_sq_%p", (void*)pTab);
      while( pSel->pPrior ){ pSel = pSel->pPrior; }
      selectColumnsFromExprList(pParse, pSel->pEList, &pTab->nCol, &pTab->aCol);
      pTab->iPKey = -1;







(new version, lines 109665-109679)
#endif
    if( pFrom->zName==0 ){
#ifndef SQLITE_OMIT_SUBQUERY
      Select *pSel = pFrom->pSelect;
      /* A sub-query in the FROM clause of a SELECT */
      assert( pSel!=0 );
      assert( pFrom->pTab==0 );
      if( sqlite3WalkSelect(pWalker, pSel) ) return WRC_Abort;
      pFrom->pTab = pTab = sqlite3DbMallocZero(db, sizeof(Table));
      if( pTab==0 ) return WRC_Abort;
      pTab->nRef = 1;
      pTab->zName = sqlite3MPrintf(db, "sqlite_sq_%p", (void*)pTab);
      while( pSel->pPrior ){ pSel = pSel->pPrior; }
      selectColumnsFromExprList(pParse, pSel->pEList, &pTab->nCol, &pTab->aCol);
      pTab->iPKey = -1;
(old version, lines 110042-110055)
  pTabList = p->pSrc;
  pEList = p->pEList;
  if( pParse->nErr || db->mallocFailed ){
    goto select_end;
  }
  isAgg = (p->selFlags & SF_Aggregate)!=0;
  assert( pEList!=0 );








  /* Begin generating code.
  */
  v = sqlite3GetVdbe(pParse);
  if( v==0 ) goto select_end;

  /* If writing to memory or generating a set







(new version, lines 110264-110284)
  pTabList = p->pSrc;
  pEList = p->pEList;
  if( pParse->nErr || db->mallocFailed ){
    goto select_end;
  }
  isAgg = (p->selFlags & SF_Aggregate)!=0;
  assert( pEList!=0 );
#if SELECTTRACE_ENABLED
  if( sqlite3SelectTrace & 0x100 ){
    SELECTTRACE(0x100,pParse,p, ("after name resolution:\n"));
    sqlite3TreeViewSelect(0, p, 0);
  }
#endif


  /* Begin generating code.
  */
  v = sqlite3GetVdbe(pParse);
  if( v==0 ) goto select_end;

  /* If writing to memory or generating a set
(old version, lines 110787-110803)
#ifdef SQLITE_DEBUG
/*
** Generate a human-readable description of the Select object.
*/
SQLITE_PRIVATE void sqlite3TreeViewSelect(TreeView *pView, const Select *p, u8 moreToFollow){
  int n = 0;
  pView = sqlite3TreeViewPush(pView, moreToFollow);
  sqlite3TreeViewLine(pView, "SELECT%s%s",
    ((p->selFlags & SF_Distinct) ? " DISTINCT" : ""),
    ((p->selFlags & SF_Aggregate) ? " agg_flag" : "")
  );
  if( p->pSrc && p->pSrc->nSrc ) n++;
  if( p->pWhere ) n++;
  if( p->pGroupBy ) n++;
  if( p->pHaving ) n++;
  if( p->pOrderBy ) n++;
  if( p->pLimit ) n++;







(new version, lines 111016-111032)
#ifdef SQLITE_DEBUG
/*
** Generate a human-readable description of the Select object.
*/
SQLITE_PRIVATE void sqlite3TreeViewSelect(TreeView *pView, const Select *p, u8 moreToFollow){
  int n = 0;
  pView = sqlite3TreeViewPush(pView, moreToFollow);
  sqlite3TreeViewLine(pView, "SELECT%s%s (0x%p)",
    ((p->selFlags & SF_Distinct) ? " DISTINCT" : ""),
    ((p->selFlags & SF_Aggregate) ? " agg_flag" : ""), p
  );
  if( p->pSrc && p->pSrc->nSrc ) n++;
  if( p->pWhere ) n++;
  if( p->pGroupBy ) n++;
  if( p->pHaving ) n++;
  if( p->pOrderBy ) n++;
  if( p->pLimit ) n++;
(old version, lines 113159-113173)

  /* The call to execSql() to attach the temp database has left the file
  ** locked (as there was more than one active statement when the transaction
  ** to read the schema was concluded). Unlock it here so that this doesn't
  ** cause problems for the call to BtreeSetPageSize() below.  */
  sqlite3BtreeCommit(pTemp);

  nRes = sqlite3BtreeGetReserve(pMain);

  /* A VACUUM cannot change the pagesize of an encrypted database. */
#ifdef SQLITE_HAS_CODEC
  if( db->nextPagesize ){
    extern void sqlite3CodecGetKey(sqlite3*, int, void**, int*);
    int nKey;
    char *zKey;







(new version, lines 113388-113402)

  /* The call to execSql() to attach the temp database has left the file
  ** locked (as there was more than one active statement when the transaction
  ** to read the schema was concluded). Unlock it here so that this doesn't
  ** cause problems for the call to BtreeSetPageSize() below.  */
  sqlite3BtreeCommit(pTemp);

  nRes = sqlite3BtreeGetOptimalReserve(pMain);

  /* A VACUUM cannot change the pagesize of an encrypted database. */
#ifdef SQLITE_HAS_CODEC
  if( db->nextPagesize ){
    extern void sqlite3CodecGetKey(sqlite3*, int, void**, int*);
    int nKey;
    char *zKey;
(old version, lines 114056-114070)
  Parse *pParse;

  int rc = SQLITE_OK;
  Table *pTab;
  char *zErr = 0;

#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT;


#endif
  sqlite3_mutex_enter(db->mutex);
  if( !db->pVtabCtx || !(pTab = db->pVtabCtx->pTab) ){
    sqlite3Error(db, SQLITE_MISUSE);
    sqlite3_mutex_leave(db->mutex);
    return SQLITE_MISUSE_BKPT;
  }







(new version, lines 114285-114301)
  Parse *pParse;

  int rc = SQLITE_OK;
  Table *pTab;
  char *zErr = 0;

#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) || zCreateTable==0 ){
    return SQLITE_MISUSE_BKPT;
  }
#endif
  sqlite3_mutex_enter(db->mutex);
  if( !db->pVtabCtx || !(pTab = db->pVtabCtx->pTab) ){
    sqlite3Error(db, SQLITE_MISUSE);
    sqlite3_mutex_leave(db->mutex);
    return SQLITE_MISUSE_BKPT;
  }
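
The armor check added above guards sqlite3_declare_vtab(). A minimal sketch,
not part of the diff, of the call as it would appear inside a virtual table's
xCreate/xConnect method; passing a NULL schema string is what now returns
SQLITE_MISUSE_BKPT instead of being dereferenced:

  int rc = sqlite3_declare_vtab(db, "CREATE TABLE x(key TEXT, value TEXT)");
  if( rc!=SQLITE_OK ) return rc;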
(old version, lines 116545-116564)
  ** and used to match WHERE clause constraints */
  nKeyCol = 0;
  pTable = pSrc->pTab;
  pWCEnd = &pWC->a[pWC->nTerm];
  pLoop = pLevel->pWLoop;
  idxCols = 0;
  for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){




    if( pLoop->prereq==0
     && (pTerm->wtFlags & TERM_VIRTUAL)==0
     && !ExprHasProperty(pTerm->pExpr, EP_FromJoin)
     && sqlite3ExprIsTableConstant(pTerm->pExpr, pSrc->iCursor) ){
      pPartial = sqlite3ExprAnd(pParse->db, pPartial,
                                sqlite3ExprDup(pParse->db, pTerm->pExpr, 0));
    }
    if( termCanDriveIndex(pTerm, pSrc, notReady) ){
      int iCol = pTerm->u.leftColumn;
      Bitmask cMask = iCol>=BMS ? MASKBIT(BMS-1) : MASKBIT(iCol);
      testcase( iCol==BMS );
      testcase( iCol==BMS-1 );
      if( !sentWarning ){







(new version, lines 116776-116799)
  ** and used to match WHERE clause constraints */
  nKeyCol = 0;
  pTable = pSrc->pTab;
  pWCEnd = &pWC->a[pWC->nTerm];
  pLoop = pLevel->pWLoop;
  idxCols = 0;
  for(pTerm=pWC->a; pTerm<pWCEnd; pTerm++){
    Expr *pExpr = pTerm->pExpr;
    assert( !ExprHasProperty(pExpr, EP_FromJoin)    /* prereq always non-zero */
         || pExpr->iRightJoinTable!=pSrc->iCursor   /*   for the right-hand   */
         || pLoop->prereq!=0 );                     /*   table of a LEFT JOIN */
    if( pLoop->prereq==0
     && (pTerm->wtFlags & TERM_VIRTUAL)==0
     && !ExprHasProperty(pExpr, EP_FromJoin)
     && sqlite3ExprIsTableConstant(pExpr, pSrc->iCursor) ){
      pPartial = sqlite3ExprAnd(pParse->db, pPartial,
                                sqlite3ExprDup(pParse->db, pExpr, 0));
    }
    if( termCanDriveIndex(pTerm, pSrc, notReady) ){
      int iCol = pTerm->u.leftColumn;
      Bitmask cMask = iCol>=BMS ? MASKBIT(BMS-1) : MASKBIT(iCol);
      testcase( iCol==BMS );
      testcase( iCol==BMS-1 );
      if( !sentWarning ){
(old version, lines 119628-119643)
/* Check to see if a partial index with pPartIndexWhere can be used
** in the current query.  Return true if it can be and false if not.
*/
static int whereUsablePartialIndex(int iTab, WhereClause *pWC, Expr *pWhere){
  int i;
  WhereTerm *pTerm;
  for(i=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){

    if( sqlite3ExprImpliesExpr(pTerm->pExpr, pWhere, iTab)
     && !ExprHasProperty(pTerm->pExpr, EP_FromJoin)
    ){
      return 1;
    }
  }
  return 0;
}








(new version, lines 119863-119879)
/* Check to see if a partial index with pPartIndexWhere can be used
** in the current query.  Return true if it can be and false if not.
*/
static int whereUsablePartialIndex(int iTab, WhereClause *pWC, Expr *pWhere){
  int i;
  WhereTerm *pTerm;
  for(i=0, pTerm=pWC->a; i<pWC->nTerm; i++, pTerm++){
    Expr *pExpr = pTerm->pExpr;
    if( sqlite3ExprImpliesExpr(pExpr, pWhere, iTab) 
     && (!ExprHasProperty(pExpr, EP_FromJoin) || pExpr->iRightJoinTable==iTab)
    ){
      return 1;
    }
  }
  return 0;
}
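
A sketch, with a hypothetical schema, of the case whereUsablePartialIndex()
accepts: the query's WHERE clause contains a term that implies the index's
WHERE clause, so the partial index may be used by the planner:

  static const char *azPartialIdxDemo[] = {
    "CREATE TABLE inv(item TEXT, qty INTEGER, archived INTEGER);",
    "CREATE INDEX inv_live ON inv(item) WHERE archived=0;",
    "SELECT qty FROM inv WHERE item='bolt' AND archived=0;"  /* index usable */
  };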

(old version, lines 124574-124662)
        pRHS->flags |= EP_Generic;
      }
      yygotominor.yy346.pExpr = sqlite3PExpr(pParse, yymsp[-3].minor.yy328 ? TK_NE : TK_EQ, yymsp[-4].minor.yy346.pExpr, pRHS, 0);
    }else{
      yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0);
      if( yygotominor.yy346.pExpr ){
        yygotominor.yy346.pExpr->x.pList = yymsp[-1].minor.yy14;
        sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr);
      }else{
        sqlite3ExprListDelete(pParse->db, yymsp[-1].minor.yy14);
      }
      if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    }
    yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 224: /* expr ::= LP select RP */
{
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_SELECT, 0, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect);
      sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    yygotominor.yy346.zStart = yymsp[-2].minor.yy0.z;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 225: /* expr ::= expr in_op LP select RP */
{
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect);
      sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 226: /* expr ::= expr in_op nm dbnm */
{
    SrcList *pSrc = sqlite3SrcListAppend(pParse->db, 0,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-3].minor.yy346.pExpr, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = sqlite3SelectNew(pParse, 0,pSrc,0,0,0,0,0,0,0);
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect);
      sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SrcListDelete(pParse->db, pSrc);
    }
    if( yymsp[-2].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    yygotominor.yy346.zStart = yymsp[-3].minor.yy346.zStart;
    yygotominor.yy346.zEnd = yymsp[0].minor.yy0.z ? &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] : &yymsp[-1].minor.yy0.z[yymsp[-1].minor.yy0.n];
  }
        break;
      case 227: /* expr ::= EXISTS LP select RP */
{
    Expr *p = yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_EXISTS, 0, 0, 0);
    if( p ){
      p->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(p, EP_xIsSelect);
      sqlite3ExprSetHeight(pParse, p);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    yygotominor.yy346.zStart = yymsp[-3].minor.yy0.z;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 228: /* expr ::= CASE case_operand case_exprlist case_else END */
{
  yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_CASE, yymsp[-3].minor.yy132, 0, 0);
  if( yygotominor.yy346.pExpr ){
    yygotominor.yy346.pExpr->x.pList = yymsp[-1].minor.yy132 ? sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy14,yymsp[-1].minor.yy132) : yymsp[-2].minor.yy14;
    sqlite3ExprSetHeight(pParse, yygotominor.yy346.pExpr);
  }else{
    sqlite3ExprListDelete(pParse->db, yymsp[-2].minor.yy14);
    sqlite3ExprDelete(pParse->db, yymsp[-1].minor.yy132);
  }
  yygotominor.yy346.zStart = yymsp[-4].minor.yy0.z;
  yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
}







(new version, lines 124810-124898)
        pRHS->flags |= EP_Generic;
      }
      yygotominor.yy346.pExpr = sqlite3PExpr(pParse, yymsp[-3].minor.yy328 ? TK_NE : TK_EQ, yymsp[-4].minor.yy346.pExpr, pRHS, 0);
    }else{
      yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0);
      if( yygotominor.yy346.pExpr ){
        yygotominor.yy346.pExpr->x.pList = yymsp[-1].minor.yy14;
        sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy346.pExpr);
      }else{
        sqlite3ExprListDelete(pParse->db, yymsp[-1].minor.yy14);
      }
      if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    }
    yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 224: /* expr ::= LP select RP */
{
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_SELECT, 0, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect|EP_Subquery);
      sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    yygotominor.yy346.zStart = yymsp[-2].minor.yy0.z;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 225: /* expr ::= expr in_op LP select RP */
{
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-4].minor.yy346.pExpr, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect|EP_Subquery);
      sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    if( yymsp[-3].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    yygotominor.yy346.zStart = yymsp[-4].minor.yy346.zStart;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 226: /* expr ::= expr in_op nm dbnm */
{
    SrcList *pSrc = sqlite3SrcListAppend(pParse->db, 0,&yymsp[-1].minor.yy0,&yymsp[0].minor.yy0);
    yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_IN, yymsp[-3].minor.yy346.pExpr, 0, 0);
    if( yygotominor.yy346.pExpr ){
      yygotominor.yy346.pExpr->x.pSelect = sqlite3SelectNew(pParse, 0,pSrc,0,0,0,0,0,0,0);
      ExprSetProperty(yygotominor.yy346.pExpr, EP_xIsSelect|EP_Subquery);
      sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy346.pExpr);
    }else{
      sqlite3SrcListDelete(pParse->db, pSrc);
    }
    if( yymsp[-2].minor.yy328 ) yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_NOT, yygotominor.yy346.pExpr, 0, 0);
    yygotominor.yy346.zStart = yymsp[-3].minor.yy346.zStart;
    yygotominor.yy346.zEnd = yymsp[0].minor.yy0.z ? &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n] : &yymsp[-1].minor.yy0.z[yymsp[-1].minor.yy0.n];
  }
        break;
      case 227: /* expr ::= EXISTS LP select RP */
{
    Expr *p = yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_EXISTS, 0, 0, 0);
    if( p ){
      p->x.pSelect = yymsp[-1].minor.yy3;
      ExprSetProperty(p, EP_xIsSelect|EP_Subquery);
      sqlite3ExprSetHeightAndFlags(pParse, p);
    }else{
      sqlite3SelectDelete(pParse->db, yymsp[-1].minor.yy3);
    }
    yygotominor.yy346.zStart = yymsp[-3].minor.yy0.z;
    yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
  }
        break;
      case 228: /* expr ::= CASE case_operand case_exprlist case_else END */
{
  yygotominor.yy346.pExpr = sqlite3PExpr(pParse, TK_CASE, yymsp[-3].minor.yy132, 0, 0);
  if( yygotominor.yy346.pExpr ){
    yygotominor.yy346.pExpr->x.pList = yymsp[-1].minor.yy132 ? sqlite3ExprListAppend(pParse,yymsp[-2].minor.yy14,yymsp[-1].minor.yy132) : yymsp[-2].minor.yy14;
    sqlite3ExprSetHeightAndFlags(pParse, yygotominor.yy346.pExpr);
  }else{
    sqlite3ExprListDelete(pParse->db, yymsp[-2].minor.yy14);
    sqlite3ExprDelete(pParse->db, yymsp[-1].minor.yy132);
  }
  yygotominor.yy346.zStart = yymsp[-4].minor.yy0.z;
  yygotominor.yy346.zEnd = &yymsp[0].minor.yy0.z[yymsp[0].minor.yy0.n];
}
(old version, lines 125890-125907)
  void *pEngine;                  /* The LEMON-generated LALR(1) parser */
  int tokenType;                  /* type of the next token */
  int lastTokenParsed = -1;       /* type of the previous token */
  u8 enableLookaside;             /* Saved value of db->lookaside.bEnabled */
  sqlite3 *db = pParse->db;       /* The database connection */
  int mxSqlLen;                   /* Max length of an SQL string */


#ifdef SQLITE_ENABLE_API_ARMOR
  if( zSql==0 || pzErrMsg==0 ) return SQLITE_MISUSE_BKPT;
#endif
  mxSqlLen = db->aLimit[SQLITE_LIMIT_SQL_LENGTH];
  if( db->nVdbeActive==0 ){
    db->u1.isInterrupted = 0;
  }
  pParse->rc = SQLITE_OK;
  pParse->zTail = zSql;
  i = 0;







(new version, lines 126126-126140)
  void *pEngine;                  /* The LEMON-generated LALR(1) parser */
  int tokenType;                  /* type of the next token */
  int lastTokenParsed = -1;       /* type of the previous token */
  u8 enableLookaside;             /* Saved value of db->lookaside.bEnabled */
  sqlite3 *db = pParse->db;       /* The database connection */
  int mxSqlLen;                   /* Max length of an SQL string */

  assert( zSql!=0 );



  mxSqlLen = db->aLimit[SQLITE_LIMIT_SQL_LENGTH];
  if( db->nVdbeActive==0 ){
    db->u1.isInterrupted = 0;
  }
  pParse->rc = SQLITE_OK;
  pParse->zTail = zSql;
  i = 0;
(old version, lines 127821-127835)
*/
SQLITE_API int sqlite3_busy_handler(
  sqlite3 *db,
  int (*xBusy)(void*,int),
  void *pArg
){
#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE;
#endif
  sqlite3_mutex_enter(db->mutex);
  db->busyHandler.xFunc = xBusy;
  db->busyHandler.pArg = pArg;
  db->busyHandler.nBusy = 0;
  db->busyTimeout = 0;
  sqlite3_mutex_leave(db->mutex);







(new version, lines 128054-128068)
*/
SQLITE_API int sqlite3_busy_handler(
  sqlite3 *db,
  int (*xBusy)(void*,int),
  void *pArg
){
#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) ) return SQLITE_MISUSE_BKPT;
#endif
  sqlite3_mutex_enter(db->mutex);
  db->busyHandler.xFunc = xBusy;
  db->busyHandler.pArg = pArg;
  db->busyHandler.nBusy = 0;
  db->busyTimeout = 0;
  sqlite3_mutex_leave(db->mutex);
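
A minimal sketch, not part of the diff, of registering a handler through this
interface; returning non-zero asks SQLite to retry the lock, returning zero
gives up with SQLITE_BUSY:

  static int demoBusy(void *pArg, int nPrior){
    (void)pArg;
    if( nPrior>=10 ) return 0;   /* give up after ten attempts        */
    sqlite3_sleep(100);          /* wait 100 ms, then ask for a retry */
    return 1;
  }
  /* ... */
  sqlite3_busy_handler(db, demoBusy, 0);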
(old version, lines 129114-129127)
  db->szMmap = sqlite3GlobalConfig.szMmap;
  db->nextPagesize = 0;
  db->nMaxSorterMmap = 0x7FFFFFFF;
  db->flags |= SQLITE_ShortColNames | SQLITE_EnableTrigger | SQLITE_CacheSpill
#if !defined(SQLITE_DEFAULT_AUTOMATIC_INDEX) || SQLITE_DEFAULT_AUTOMATIC_INDEX
                 | SQLITE_AutoIndex
#endif



#if SQLITE_DEFAULT_FILE_FORMAT<4
                 | SQLITE_LegacyFileFmt
#endif
#ifdef SQLITE_ENABLE_LOAD_EXTENSION
                 | SQLITE_LoadExtension
#endif
#if SQLITE_DEFAULT_RECURSIVE_TRIGGERS







(new version, lines 129347-129363)
  db->szMmap = sqlite3GlobalConfig.szMmap;
  db->nextPagesize = 0;
  db->nMaxSorterMmap = 0x7FFFFFFF;
  db->flags |= SQLITE_ShortColNames | SQLITE_EnableTrigger | SQLITE_CacheSpill
#if !defined(SQLITE_DEFAULT_AUTOMATIC_INDEX) || SQLITE_DEFAULT_AUTOMATIC_INDEX
                 | SQLITE_AutoIndex
#endif
#if SQLITE_DEFAULT_CKPTFULLFSYNC
                 | SQLITE_CkptFullFSync
#endif
#if SQLITE_DEFAULT_FILE_FORMAT<4
                 | SQLITE_LegacyFileFmt
#endif
#ifdef SQLITE_ENABLE_LOAD_EXTENSION
                 | SQLITE_LoadExtension
#endif
#if SQLITE_DEFAULT_RECURSIVE_TRIGGERS
(old version, lines 129549-129569)
  int *pAutoinc               /* OUTPUT: True if column is auto-increment */
){
  int rc;
  char *zErrMsg = 0;
  Table *pTab = 0;
  Column *pCol = 0;
  int iCol = 0;

  char const *zDataType = 0;
  char const *zCollSeq = 0;
  int notnull = 0;
  int primarykey = 0;
  int autoinc = 0;








  /* Ensure the database schema has been loaded */
  sqlite3_mutex_enter(db->mutex);
  sqlite3BtreeEnterAll(db);
  rc = sqlite3Init(db, &zErrMsg);
  if( SQLITE_OK!=rc ){
    goto error_out;
  }







(new version, lines 129785-129811)
  int *pAutoinc               /* OUTPUT: True if column is auto-increment */
){
  int rc;
  char *zErrMsg = 0;
  Table *pTab = 0;
  Column *pCol = 0;
  int iCol = 0;

  char const *zDataType = 0;
  char const *zCollSeq = 0;
  int notnull = 0;
  int primarykey = 0;
  int autoinc = 0;


#ifdef SQLITE_ENABLE_API_ARMOR
  if( !sqlite3SafetyCheckOk(db) || zTableName==0 ){
    return SQLITE_MISUSE_BKPT;
  }
#endif

  /* Ensure the database schema has been loaded */
  sqlite3_mutex_enter(db->mutex);
  sqlite3BtreeEnterAll(db);
  rc = sqlite3Init(db, &zErrMsg);
  if( SQLITE_OK!=rc ){
    goto error_out;
  }
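
A minimal sketch, not part of the diff, of calling
sqlite3_table_column_metadata() for a hypothetical "users" table; a NULL
table name is what the new armor check rejects:

  const char *zType, *zColl;
  int notnull, pk, autoinc;
  int rc = sqlite3_table_column_metadata(db, "main", "users", "id",
               &zType, &zColl, &notnull, &pk, &autoinc);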
(old version, lines 129702-129716)
      rc = sqlite3OsFileControl(fd, op, pArg);
    }else{
      rc = SQLITE_NOTFOUND;
    }
    sqlite3BtreeLeave(pBtree);
  }
  sqlite3_mutex_leave(db->mutex);
  return rc;   
}

/*
** Interface to the testing logic.
*/
SQLITE_API int sqlite3_test_control(int op, ...){
  int rc = 0;







(new version, lines 129944-129958)
      rc = sqlite3OsFileControl(fd, op, pArg);
    }else{
      rc = SQLITE_NOTFOUND;
    }
    sqlite3BtreeLeave(pBtree);
  }
  sqlite3_mutex_leave(db->mutex);
  return rc;
}

/*
** Interface to the testing logic.
*/
SQLITE_API int sqlite3_test_control(int op, ...){
  int rc = 0;
(old version, lines 130005-130018)
    ** Return SQLITE_OK if SQLite has been initialized and SQLITE_ERROR if
    ** not.
    */
    case SQLITE_TESTCTRL_ISINIT: {
      if( sqlite3GlobalConfig.isInit==0 ) rc = SQLITE_ERROR;
      break;
    }





























  }
  va_end(ap);
#endif /* SQLITE_OMIT_BUILTIN_TEST */
  return rc;
}

/*







(new version, lines 130247-130289)
    ** Return SQLITE_OK if SQLite has been initialized and SQLITE_ERROR if
    ** not.
    */
    case SQLITE_TESTCTRL_ISINIT: {
      if( sqlite3GlobalConfig.isInit==0 ) rc = SQLITE_ERROR;
      break;
    }

    /*   sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, db, dbName, onOff, tnum);
    **
    ** This test control is used to create imposter tables.  "db" is a pointer
    ** to the database connection.  dbName is the database name (ex: "main" or
    ** "temp") which will receive the imposter.  "onOff" turns imposter mode on
    ** or off.  "tnum" is the root page of the b-tree to which the imposter
    ** table should connect.
    **
    ** Enable imposter mode only when the schema has already been parsed.  Then
    ** run a single CREATE TABLE statement to construct the imposter table in the
    ** parsed schema.  Then turn imposter mode back off again.
    **
    ** If onOff==0 and tnum>0 then reset the schema for all databases, causing
    ** the schema to be reparsed the next time it is needed.  This has the
    ** effect of erasing all imposter tables.
    */
    case SQLITE_TESTCTRL_IMPOSTER: {
      sqlite3 *db = va_arg(ap, sqlite3*);
      sqlite3_mutex_enter(db->mutex);
      db->init.iDb = sqlite3FindDbName(db, va_arg(ap,const char*));
      db->init.busy = db->init.imposterTable = va_arg(ap,int);
      db->init.newTnum = va_arg(ap,int);
      if( db->init.busy==0 && db->init.newTnum>0 ){
        sqlite3ResetAllSchemasOfConnection(db);
      }
      sqlite3_mutex_leave(db->mutex);
      break;
    }
  }
  va_end(ap);
#endif /* SQLITE_OMIT_BUILTIN_TEST */
  return rc;
}
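
Following the comment above, a sketch (not part of the diff) of how a test
might create an imposter table over an existing b-tree whose root page is
tnum, then switch imposter mode back off:

  sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, db, "main", 1, tnum);
  sqlite3_exec(db, "CREATE TABLE imposter(a,b,c);", 0, 0, 0);
  sqlite3_test_control(SQLITE_TESTCTRL_IMPOSTER, db, "main", 0, 0);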

/*
(old version, lines 134487-134504)

  /* Compile a SELECT statement for this cursor. For a full-table-scan, the
  ** statement loops through all rows of the %_content table. For a
  ** full-text query or docid lookup, the statement retrieves a single
  ** row by docid.
  */
  if( eSearch==FTS3_FULLSCAN_SEARCH ){

    zSql = sqlite3_mprintf(





        "SELECT %s ORDER BY rowid %s",
        p->zReadExprlist, (pCsr->bDesc ? "DESC" : "ASC")
    );

    if( zSql ){
      rc = sqlite3_prepare_v2(p->db, zSql, -1, &pCsr->pStmt, 0);
      sqlite3_free(zSql);
    }else{
      rc = SQLITE_NOMEM;
    }
  }else if( eSearch==FTS3_DOCID_SEARCH ){







(new version, lines 134758-134782)

  /* Compile a SELECT statement for this cursor. For a full-table-scan, the
  ** statement loops through all rows of the %_content table. For a
  ** full-text query or docid lookup, the statement retrieves a single
  ** row by docid.
  */
  if( eSearch==FTS3_FULLSCAN_SEARCH ){
    if( pDocidGe || pDocidLe ){
      zSql = sqlite3_mprintf(
          "SELECT %s WHERE rowid BETWEEN %lld AND %lld ORDER BY rowid %s",
          p->zReadExprlist, pCsr->iMinDocid, pCsr->iMaxDocid,
          (pCsr->bDesc ? "DESC" : "ASC")
      );
    }else{
      zSql = sqlite3_mprintf("SELECT %s ORDER BY rowid %s", 
          p->zReadExprlist, (pCsr->bDesc ? "DESC" : "ASC")
      );
    }
    if( zSql ){
      rc = sqlite3_prepare_v2(p->db, zSql, -1, &pCsr->pStmt, 0);
      sqlite3_free(zSql);
    }else{
      rc = SQLITE_NOMEM;
    }
  }else if( eSearch==FTS3_DOCID_SEARCH ){
(old version, lines 136979-136993)
  }

  iDocid = pExpr->iDocid;
  pIter = pPhrase->doclist.pList;
  if( iDocid!=pCsr->iPrevId || pExpr->bEof ){
    int rc = SQLITE_OK;
    int bDescDoclist = pTab->bDescIdx;      /* For DOCID_CMP macro */
    int iMul;                     /* +1 if csr dir matches index dir, else -1 */
    int bOr = 0;
    u8 bEof = 0;
    u8 bTreeEof = 0;
    Fts3Expr *p;                  /* Used to iterate from pExpr to root */
    Fts3Expr *pNear;              /* Most senior NEAR ancestor (or pExpr) */

    /* Check if this phrase descends from an OR expression node. If not, 







(new version, lines 137257-137270)
  }

  iDocid = pExpr->iDocid;
  pIter = pPhrase->doclist.pList;
  if( iDocid!=pCsr->iPrevId || pExpr->bEof ){
    int rc = SQLITE_OK;
    int bDescDoclist = pTab->bDescIdx;      /* For DOCID_CMP macro */

    int bOr = 0;
    u8 bEof = 0;
    u8 bTreeEof = 0;
    Fts3Expr *p;                  /* Used to iterate from pExpr to root */
    Fts3Expr *pNear;              /* Most senior NEAR ancestor (or pExpr) */

    /* Check if this phrase descends from an OR expression node. If not, 
(old version, lines 147526-147541)
      );
      isShiftDone = 1;

      /* Now that the shift has been done, check if the initial "..." are
      ** required. They are required if (a) this is not the first fragment,
      ** or (b) this fragment does not begin at position 0 of its column. 
      */
      if( rc==SQLITE_OK && (iPos>0 || iFragment>0) ){

        rc = fts3StringAppend(pOut, zEllipsis, -1);



      }
      if( rc!=SQLITE_OK || iCurrent<iPos ) continue;
    }

    if( iCurrent>=(iPos+nSnippet) ){
      if( isLast ){
        rc = fts3StringAppend(pOut, zEllipsis, -1);







(new version, lines 147803-147822)
      );
      isShiftDone = 1;

      /* Now that the shift has been done, check if the initial "..." are
      ** required. They are required if (a) this is not the first fragment,
      ** or (b) this fragment does not begin at position 0 of its column. 
      */
      if( rc==SQLITE_OK ){
        if( iPos>0 || iFragment>0 ){
          rc = fts3StringAppend(pOut, zEllipsis, -1);
        }else if( iBegin ){
          rc = fts3StringAppend(pOut, zDoc, iBegin);
        }
      }
      if( rc!=SQLITE_OK || iCurrent<iPos ) continue;
    }

    if( iCurrent>=(iPos+nSnippet) ){
      if( isLast ){
        rc = fts3StringAppend(pOut, zEllipsis, -1);

Changes to src/sqlite3.h.

(old version, lines 103-119)
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.8.8.3"
#define SQLITE_VERSION_NUMBER 3008008
#define SQLITE_SOURCE_ID      "2015-02-25 13:29:11 9d6c1880fb75660bbabd693175579529785f8a6b"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros







(new version, lines 103-119)
** string contains the date and time of the check-in (UTC) and an SHA1
** hash of the entire source tree.
**
** See also: [sqlite3_libversion()],
** [sqlite3_libversion_number()], [sqlite3_sourceid()],
** [sqlite_version()] and [sqlite_source_id()].
*/
#define SQLITE_VERSION        "3.8.8"
#define SQLITE_VERSION_NUMBER 3008008
#define SQLITE_SOURCE_ID      "2015-02-25 14:25:31 6d132e7a224ee68b5cefe9222944aac5760ffc20"

/*
** CAPI3REF: Run-Time Library Version Numbers
** KEYWORDS: sqlite3_version, sqlite3_sourceid
**
** These interfaces provide the same information as the [SQLITE_VERSION],
** [SQLITE_VERSION_NUMBER], and [SQLITE_SOURCE_ID] C preprocessor macros
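
A minimal sketch, not part of the diff, of comparing the header the
application was compiled against with the library it is actually linked to
at run time (assumes <stdio.h> is included):

  if( sqlite3_libversion_number()!=SQLITE_VERSION_NUMBER ){
    fprintf(stderr, "compiled against %s but running %s\n",
            SQLITE_VERSION, sqlite3_libversion());
  }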
(old version, lines 945-979)
** opcode causes the xFileControl method to swap the file handle with the one
** pointed to by the pArg argument.  This capability is used during testing
** and only needs to be supported when SQLITE_TEST is defined.
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_GET_LOCKPROXYFILE             2
#define SQLITE_SET_LOCKPROXYFILE             3
#define SQLITE_LAST_ERRNO                    4
#define SQLITE_FCNTL_SIZE_HINT               5
#define SQLITE_FCNTL_CHUNK_SIZE              6
#define SQLITE_FCNTL_FILE_POINTER            7
#define SQLITE_FCNTL_SYNC_OMITTED            8
#define SQLITE_FCNTL_WIN32_AV_RETRY          9
#define SQLITE_FCNTL_PERSIST_WAL            10
#define SQLITE_FCNTL_OVERWRITE              11
#define SQLITE_FCNTL_VFSNAME                12
#define SQLITE_FCNTL_POWERSAFE_OVERWRITE    13
#define SQLITE_FCNTL_PRAGMA                 14
#define SQLITE_FCNTL_BUSYHANDLER            15
#define SQLITE_FCNTL_TEMPFILENAME           16
#define SQLITE_FCNTL_MMAP_SIZE              18
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23







/*
** CAPI3REF: Mutex Handle
**
** The mutex module within SQLite defines [sqlite3_mutex] to be an
** abstract type for a mutex object.  The SQLite core never looks
** at the internal representation of an [sqlite3_mutex].  It only







(new version, lines 945-985)
** opcode causes the xFileControl method to swap the file handle with the one
** pointed to by the pArg argument.  This capability is used during testing
** and only needs to be supported when SQLITE_TEST is defined.
**
** </ul>
*/
#define SQLITE_FCNTL_LOCKSTATE               1
#define SQLITE_FCNTL_GET_LOCKPROXYFILE       2
#define SQLITE_FCNTL_SET_LOCKPROXYFILE       3
#define SQLITE_FCNTL_LAST_ERRNO              4
#define SQLITE_FCNTL_SIZE_HINT               5
#define SQLITE_FCNTL_CHUNK_SIZE              6
#define SQLITE_FCNTL_FILE_POINTER            7
#define SQLITE_FCNTL_SYNC_OMITTED            8
#define SQLITE_FCNTL_WIN32_AV_RETRY          9
#define SQLITE_FCNTL_PERSIST_WAL            10
#define SQLITE_FCNTL_OVERWRITE              11
#define SQLITE_FCNTL_VFSNAME                12
#define SQLITE_FCNTL_POWERSAFE_OVERWRITE    13
#define SQLITE_FCNTL_PRAGMA                 14
#define SQLITE_FCNTL_BUSYHANDLER            15
#define SQLITE_FCNTL_TEMPFILENAME           16
#define SQLITE_FCNTL_MMAP_SIZE              18
#define SQLITE_FCNTL_TRACE                  19
#define SQLITE_FCNTL_HAS_MOVED              20
#define SQLITE_FCNTL_SYNC                   21
#define SQLITE_FCNTL_COMMIT_PHASETWO        22
#define SQLITE_FCNTL_WIN32_SET_HANDLE       23

/* deprecated names */
#define SQLITE_GET_LOCKPROXYFILE      SQLITE_FCNTL_GET_LOCKPROXYFILE
#define SQLITE_SET_LOCKPROXYFILE      SQLITE_FCNTL_SET_LOCKPROXYFILE
#define SQLITE_LAST_ERRNO             SQLITE_FCNTL_LAST_ERRNO


/*
** CAPI3REF: Mutex Handle
**
** The mutex module within SQLite defines [sqlite3_mutex] to be an
** abstract type for a mutex object.  The SQLite core never looks
** at the internal representation of an [sqlite3_mutex].  It only
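
A minimal sketch, not part of the diff, of passing one of these opcodes to a
database connection through sqlite3_file_control(); for
SQLITE_FCNTL_PERSIST_WAL the argument points to an int that is -1 to query
the current setting or 0/1 to change it:

  int bPersist = -1;   /* -1 means "query current setting" */
  sqlite3_file_control(db, "main", SQLITE_FCNTL_PERSIST_WAL, &bPersist);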
(old version, lines 2227-2240)
SQLITE_API void sqlite3_free_table(char **result);

/*
** CAPI3REF: Formatted String Printing Functions
**
** These routines are work-alikes of the "printf()" family of functions
** from the standard C library.




**
** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their
** results into memory obtained from [sqlite3_malloc()].
** The strings returned by these two routines should be
** released by [sqlite3_free()].  ^Both routines return a
** NULL pointer if [sqlite3_malloc()] is unable to allocate enough
** memory to hold the resulting string.







(new version, lines 2233-2250)
SQLITE_API void sqlite3_free_table(char **result);

/*
** CAPI3REF: Formatted String Printing Functions
**
** These routines are work-alikes of the "printf()" family of functions
** from the standard C library.
** These routines understand most of the common K&R formatting options,
** plus some additional non-standard formats, detailed below.
** Note that some of the more obscure formatting options from recent
** C-library standards are omitted from this implementation.
**
** ^The sqlite3_mprintf() and sqlite3_vmprintf() routines write their
** results into memory obtained from [sqlite3_malloc()].
** The strings returned by these two routines should be
** released by [sqlite3_free()].  ^Both routines return a
** NULL pointer if [sqlite3_malloc()] is unable to allocate enough
** memory to hold the resulting string.
(old version, lines 2259-2273)
** written will be n-1 characters.
**
** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf().
**
** These routines all implement some additional formatting
** options that are useful for constructing SQL statements.
** All of the usual printf() formatting options apply.  In addition, there
** are "%q", "%Q", and "%z" options.
**
** ^(The %q option works like %s in that it substitutes a nul-terminated
** string from the argument list.  But %q also doubles every '\'' character.
** %q is designed for use inside a string literal.)^  By doubling each '\''
** character it escapes that character and allows it to be inserted into
** the string.
**







(new version, lines 2269-2283)
** written will be n-1 characters.
**
** ^The sqlite3_vsnprintf() routine is a varargs version of sqlite3_snprintf().
**
** These routines all implement some additional formatting
** options that are useful for constructing SQL statements.
** All of the usual printf() formatting options apply.  In addition, there
** are "%q", "%Q", "%w" and "%z" options.
**
** ^(The %q option works like %s in that it substitutes a nul-terminated
** string from the argument list.  But %q also doubles every '\'' character.
** %q is designed for use inside a string literal.)^  By doubling each '\''
** character it escapes that character and allows it to be inserted into
** the string.
**
(old version, lines 2311-2324)
**  char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText);
**  sqlite3_exec(db, zSQL, 0, 0, 0);
**  sqlite3_free(zSQL);
** </pre></blockquote>
**
** The code above will render a correct SQL statement in the zSQL
** variable even if the zText variable is a NULL pointer.






**
** ^(The "%z" formatting option works like "%s" but with the
** addition that after the string has been read and copied into
** the result, [sqlite3_free()] is called on the input string.)^
*/
SQLITE_API char *sqlite3_mprintf(const char*,...);
SQLITE_API char *sqlite3_vmprintf(const char*, va_list);







(new version, lines 2321-2340)
**  char *zSQL = sqlite3_mprintf("INSERT INTO table VALUES(%Q)", zText);
**  sqlite3_exec(db, zSQL, 0, 0, 0);
**  sqlite3_free(zSQL);
** </pre></blockquote>
**
** The code above will render a correct SQL statement in the zSQL
** variable even if the zText variable is a NULL pointer.
**
** ^(The "%w" formatting option is like "%q" except that it expects to
** be contained within double-quotes instead of single quotes, and it
** escapes the double-quote character instead of the single-quote
** character.)^  The "%w" formatting option is intended for safely inserting
** table and column names into a constructed SQL statement.
**
** ^(The "%z" formatting option works like "%s" but with the
** addition that after the string has been read and copied into
** the result, [sqlite3_free()] is called on the input string.)^
*/
SQLITE_API char *sqlite3_mprintf(const char*,...);
SQLITE_API char *sqlite3_vmprintf(const char*, va_list);
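
A minimal sketch, not part of the diff, using the options described above:
%w quotes an identifier for use inside double-quotes and %q a string value
inside single-quotes (zTable and zName are hypothetical inputs):

  char *zSql = sqlite3_mprintf(
      "INSERT INTO \"%w\"(name) VALUES('%q')", zTable, zName);
  sqlite3_exec(db, zSql, 0, 0, 0);
  sqlite3_free(zSql);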
(old version, lines 5061-5074)
**
** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled
** successfully.  An [error code] is returned otherwise.)^
**
** ^Shared cache is disabled by default. But this might change in
** future releases of SQLite.  Applications that care about shared
** cache setting should set it explicitly.





**
** This interface is threadsafe on processors where writing a
** 32-bit integer is atomic.
**
** See Also:  [SQLite Shared-Cache Mode]
*/
SQLITE_API int sqlite3_enable_shared_cache(int);







(new version, lines 5077-5095)
**
** ^(This routine returns [SQLITE_OK] if shared cache was enabled or disabled
** successfully.  An [error code] is returned otherwise.)^
**
** ^Shared cache is disabled by default. But this might change in
** future releases of SQLite.  Applications that care about shared
** cache setting should set it explicitly.
**
** Note: This method is disabled on MacOS X 10.7 and iOS version 5.0
** and will always return SQLITE_MISUSE. On those systems, 
** shared cache mode should be enabled per-database connection via 
** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE].
**
** This interface is threadsafe on processors where writing a
** 32-bit integer is atomic.
**
** See Also:  [SQLite Shared-Cache Mode]
*/
SQLITE_API int sqlite3_enable_shared_cache(int);
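
A minimal sketch, not part of the diff, of the per-connection alternative
mentioned in the note above:

  sqlite3 *db = 0;
  int rc = sqlite3_open_v2("app.db", &db,
               SQLITE_OPEN_READWRITE|SQLITE_OPEN_CREATE|SQLITE_OPEN_SHAREDCACHE,
               0);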
(old version, lines 6261-6275)
#define SQLITE_TESTCTRL_LOCALTIME_FAULT         18
#define SQLITE_TESTCTRL_EXPLAIN_STMT            19  /* NOT USED */
#define SQLITE_TESTCTRL_NEVER_CORRUPT           20
#define SQLITE_TESTCTRL_VDBE_COVERAGE           21
#define SQLITE_TESTCTRL_BYTEORDER               22
#define SQLITE_TESTCTRL_ISINIT                  23
#define SQLITE_TESTCTRL_SORTER_MMAP             24

#define SQLITE_TESTCTRL_LAST                    24

/*
** CAPI3REF: SQLite Runtime Status
**
** ^This interface is used to retrieve runtime status information
** about the performance of SQLite, and optionally to reset various
** highwater marks.  ^The first argument is an integer code for







(new version, lines 6282-6297)
#define SQLITE_TESTCTRL_LOCALTIME_FAULT         18
#define SQLITE_TESTCTRL_EXPLAIN_STMT            19  /* NOT USED */
#define SQLITE_TESTCTRL_NEVER_CORRUPT           20
#define SQLITE_TESTCTRL_VDBE_COVERAGE           21
#define SQLITE_TESTCTRL_BYTEORDER               22
#define SQLITE_TESTCTRL_ISINIT                  23
#define SQLITE_TESTCTRL_SORTER_MMAP             24
#define SQLITE_TESTCTRL_IMPOSTER                25
#define SQLITE_TESTCTRL_LAST                    25

/*
** CAPI3REF: SQLite Runtime Status
**
** ^This interface is used to retrieve runtime status information
** about the performance of SQLite, and optionally to reset various
** highwater marks.  ^The first argument is an integer code for

Changes to src/stat.c.

(old version, lines 222-236)
      }else{
        b = 1;
      }
      a = t/fsize;
      fossil_print("%*s%d:%d\n", colWidth, "compression-ratio:", a, b);
    }
    n = db_int(0, "SELECT COUNT(*) FROM event e WHERE e.type='ci'");
    fossil_print("%*s%d\n", colWidth, "checkins:", n);
    n = db_int(0, "SELECT count(*) FROM filename /*scan*/");
    fossil_print("%*s%d across all branches\n", colWidth, "files:", n);
    n = db_int(0, "SELECT count(*) FROM tag  /*scan*/"
                  " WHERE tagname GLOB 'wiki-*'");
    m = db_int(0, "SELECT COUNT(*) FROM event WHERE type='w'");
    fossil_print("%*s%d (%d changes)\n", colWidth, "wiki-pages:", n, m);
    n = db_int(0, "SELECT count(*) FROM tag  /*scan*/"







(new version, lines 222-236)
      }else{
        b = 1;
      }
      a = t/fsize;
      fossil_print("%*s%d:%d\n", colWidth, "compression-ratio:", a, b);
    }
    n = db_int(0, "SELECT COUNT(*) FROM event e WHERE e.type='ci'");
    fossil_print("%*s%d\n", colWidth, "check-ins:", n);
    n = db_int(0, "SELECT count(*) FROM filename /*scan*/");
    fossil_print("%*s%d across all branches\n", colWidth, "files:", n);
    n = db_int(0, "SELECT count(*) FROM tag  /*scan*/"
                  " WHERE tagname GLOB 'wiki-*'");
    m = db_int(0, "SELECT COUNT(*) FROM event WHERE type='w'");
    fossil_print("%*s%d (%d changes)\n", colWidth, "wiki-pages:", n, m);
    n = db_int(0, "SELECT count(*) FROM tag  /*scan*/"

Changes to src/statrep.c.

(old version, lines 125-139)
** on the 'type' flag. See stats_report_init_view().
** The returned bytes are static.
*/
static const char *stats_report_label_for_type(){
  assert( statsReportType && "Must call stats_report_init_view() first." );
  switch( statsReportType ){
    case 'c':
      return "checkins";
    case 'e':
      return "technotes";
    case 'w':
      return "wiki changes";
    case 't':
      return "ticket changes";
    case 'g':







(new version, lines 125-139)
** on the 'type' flag. See stats_report_init_view().
** The returned bytes are static.
*/
static const char *stats_report_label_for_type(){
  assert( statsReportType && "Must call stats_report_init_view() first." );
  switch( statsReportType ){
    case 'c':
      return "check-ins";
    case 'e':
      return "technotes";
    case 'w':
      return "wiki changes";
    case 't':
      return "ticket changes";
    case 'g':
(old version, lines 164-180)
  cgi_printf("<span>Types:</span> ");
  if('*' == statsReportType){
    cgi_printf(" <strong>all</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s'>all</a>", zTop);
  }
  if('c' == statsReportType){
    cgi_printf(" <strong>checkins</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s&type=ci'>checkins</a>", zTop);
  }
  if('e' == statsReportType){
    cgi_printf(" <strong>technotes</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s&type=e'>technotes</a>", zTop);
  }
  if( 't' == statsReportType ){







(new version, lines 164-180)
  cgi_printf("<span>Types:</span> ");
  if('*' == statsReportType){
    cgi_printf(" <strong>all</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s'>all</a>", zTop);
  }
  if('c' == statsReportType){
    cgi_printf(" <strong>check-ins</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s&type=ci'>check-ins</a>", zTop);
  }
  if('e' == statsReportType){
    cgi_printf(" <strong>technotes</strong>", zTop);
  }else{
    cgi_printf(" <a href='%s&type=e'>technotes</a>", zTop);
  }
  if( 't' == statsReportType ){
(old version, lines 472-491)
    "   WHERE filename.fnid=mlink.fnid"
    "   GROUP BY 1"
  );
  db_prepare(&query,
    "SELECT filename, cnt FROM statrep ORDER BY cnt DESC, filename /*sort*/"
  );
  mxEvent = db_int(1, "SELECT max(cnt) FROM statrep");
  @ <h1>Checkins Per File</h1>
  @ <table class='statistics-report-table-events' border='0'
  @ cellpadding='2' cellspacing='0' id='statsTable'>
  @ <thead><tr>
  @ <th>File</th>
  @ <th>Checkins</th>
  @ <th width='90%%'><!-- relative commits graph --></th>
  @ </tr></thead><tbody>
  while( SQLITE_ROW == db_step(&query) ){
    const char *zFile = db_column_text(&query, 0);
    const int n = db_column_int(&query, 1);
    int sz;
    if( n<=0 ) continue;







(new version, lines 472-491)
    "   WHERE filename.fnid=mlink.fnid"
    "   GROUP BY 1"
  );
  db_prepare(&query,
    "SELECT filename, cnt FROM statrep ORDER BY cnt DESC, filename /*sort*/"
  );
  mxEvent = db_int(1, "SELECT max(cnt) FROM statrep");
  @ <h1>Check-ins Per File</h1>
  @ <table class='statistics-report-table-events' border='0'
  @ cellpadding='2' cellspacing='0' id='statsTable'>
  @ <thead><tr>
  @ <th>File</th>
  @ <th>Check-ins</th>
  @ <th width='90%%'><!-- relative commits graph --></th>
  @ </tr></thead><tbody>
  while( SQLITE_ROW == db_step(&query) ){
    const char *zFile = db_column_text(&query, 0);
    const int n = db_column_int(&query, 1);
    int sz;
    if( n<=0 ) continue;
(old version, lines 710-724)
** Shows activity reports for the repository.
**
** Query Parameters:
**
**   view=REPORT_NAME  Valid values: bymonth, byyear, byuser
**   user=NAME         Restricts statistics to the given user
**   type=TYPE         Restricts the report to a specific event type:
**                     ci (checkin), w (wiki), t (ticket), g (tag)
**                     Defaulting to all event types.
**
** The view-specific query parameters include:
**
** view=byweek:
**
**   y=YYYY            The year to report (default is the server's







|







710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
** Shows activity reports for the repository.
**
** Query Parameters:
**
**   view=REPORT_NAME  Valid values: bymonth, byyear, byuser
**   user=NAME         Restricts statistics to the given user
**   type=TYPE         Restricts the report to a specific event type:
**                     ci (check-in), w (wiki), t (ticket), g (tag)
**                     Defaulting to all event types.
**
** The view-specific query parameters include:
**
** view=byweek:
**
**   y=YYYY            The year to report (default is the server's
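For instance, a request built from the parameters documented above might look like the following (hypothetical values):

    /reports?view=byuser&type=ci

which limits the activity report to check-in events in the by-user view.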

Changes to src/style.c.

Lines 404-420. Previously:

  @ if(/^#/.test(x)) x = x.substr(1);
  @ if(!e) throw new Error("Expecting element with ID "+x);

Now:
  ** Maintenance note: this function must of course be available
  ** before it is called. It "should" go in the HEAD so that client
  ** HEAD code can make use of it, but because the client can replace
  ** the HEAD, and some fossil pages rely on gebi(), we put it here.
  */
  @ <script>
  @ function gebi(x){
  @ if(x.substr(0,1)=='#') x = x.substr(1);
  @ var e = document.getElementById(x);
  @ if(!e) throw new Error('Expecting element with ID '+x);
  @ else return e;}
  @ </script>
}

#if INTERFACE
/* Allowed parameters for style_adunit() */
#define ADUNIT_OFF        0x0001       /* Do not allow ads on this page */
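In the gebi() helper above, both gebi("content") and gebi("#content") resolve to the same element; the rewritten check simply strips a leading "#" with substr() instead of matching it with a regular expression.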
Lines 502-516. Previously:

               "value='%h'%s></span>\n",

Now:
          cgi_tag_query_parameter(zQPN);
        }
        switch( aSubmenuCtrl[i].eType ){
          case FF_ENTRY: {
            cgi_printf(
               "<span class='submenuctrl'>"
               "&nbsp;%h<input type='text' name='%s' size='%d' maxlength='%d'"
               " value='%h'%s></span>\n",
               aSubmenuCtrl[i].zLabel,
               zQPN,
               aSubmenuCtrl[i].iSize, aSubmenuCtrl[i].iSize,
               PD(zQPN,""),
               zDisabled
            );
            break;
Lines 821-835. Previously (the line below carried a trailing space, now removed):

    "\\/\\/\\/yEhIf\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfm"

Now:
    @   padding: 10px;
    @   margin: 0px;
    @   white-space: nowrap;
  },
  { "ul.browser li.file",
    "List element in the 'flat-view' file browser for a file",
    "  background-image: url(data:image/gif;base64,R0lGODlhEAAQAJEAAP"
    "\\/\\/\\/yEhIf\\/\\/\\/wAAACH5BAEHAAIALAAAAAAQABAAAAIvlIKpxqcfm"
    "gOUvoaqDSCxrEEfF14GqFXImJZsu73wepJzVMNxrtNTj3NATMKhpwAAOw==);\n"
    "  background-repeat: no-repeat;\n"
    "  background-position: 0px center;\n"
    "  padding-left: 20px;\n"
    "  padding-top: 2px;\n"
  },
  { "ul.browser li.dir",
Lines 1043-1057. Previously:

    "format for user color input on checkin edit page",

Now:
  { "td.rpteditex",
    "format for example table cells on the report edit page",
    @   border-width: thin;
    @   border-color: #000000;
    @   border-style: solid;
  },
  { "input.checkinUserColor",
    "format for user color input on check-in edit page",
    @ /* no special definitions, class defined, to enable color pickers, f.e.:
    @ **  add the color picker found at http:jscolor.com  as java script include
    @ **  to the header and configure the java script file with
    @ **   1. use as bindClass :checkinUserColor
    @ **   2. change the default hash adding behaviour to ON
    @ ** or change the class defition of element identified by id="clrcust"
    @ ** to a standard jscolor definition with java script in the footer. */
Lines 1106-1123 (1106-1119 before; new lines added):
    @   color: red;
  },
  { "ul.filelist",
    "List of files in a timeline",
    @   margin-top: 3px;
    @   line-height: 100%;
  },
  { "ul.filelist li",
    "List of files in a timeline",
    @   padding-top: 1px;
  },
  { "table.sbsdiffcols",
    "side-by-side diff display (column-based)",
    @   width: 90%;
    @   border-spacing: 0;
    @   font-size: xx-small;
  },
  { "table.sbsdiffcols td",

Changes to src/tag.c.

Lines 58-80. Previously:

    /* Set the propagated tag marker on checkin :rid */
    /* Remove all references to the tag from checkin :rid */

Now:
     "  FROM plink LEFT JOIN tagxref ON cid=rid AND tagid=%d"
     " WHERE pid=:pid AND isprim",
     tagType==2, tagid
  );
  db_bind_double(&s, ":mtime", mtime);

  if( tagType==2 ){
    /* Set the propagated tag marker on check-in :rid */
    db_prepare(&ins,
       "REPLACE INTO tagxref(tagid, tagtype, srcid, origid, value, mtime, rid)"
       "VALUES(%d,2,0,%d,%Q,:mtime,:rid)",
       tagid, origId, zValue
    );
    db_bind_double(&ins, ":mtime", mtime);
  }else{
    /* Remove all references to the tag from check-in :rid */
    zValue = 0;
    db_prepare(&ins,
       "DELETE FROM tagxref WHERE tagid=%d AND rid=:rid", tagid
    );
  }
  if( tagid==TAG_BGCOLOR ){
    db_prepare(&eventupdate,
Lines 350-364. Previously:

**         checkins or "e" for events. The limit option limits the number

Now:
**
**         Remove the tag TAGNAME from CHECK-IN, and also remove
**         the propagation of the tag to any descendants.
**
**     %fossil tag find ?--raw? ?-t|--type TYPE? ?-n|--limit #? TAGNAME
**
**         List all objects that use TAGNAME.  TYPE can be "ci" for
**         check-ins or "e" for events. The limit option limits the number
**         of results to the given value.
**
**     %fossil tag list|ls ?--raw? ?CHECK-IN?
**
**         List all tags, or if CHECK-IN is supplied, list
**         all tags and their values for CHECK-IN.
**
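A sketch of the "tag find" form documented above, using a hypothetical tag name:

    fossil tag find --type ci --limit 5 release-1.0

lists up to five check-ins that carry the tag "release-1.0".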

Changes to src/tar.c.

Lines 443-458. Previously:

** Given the RID for a checkin, construct a tarball containing
** all files in that checkin

Now:
    blob_reset(&file);
  }
  tar_finish(&zip);
  blob_write_to_file(&zip, g.argv[2]);
}

/*
** Given the RID for a check-in, construct a tarball containing
** all files in that check-in
**
** If RID is for an object that is not a real manifest, then the
** resulting tarball contains a single file which is the RID
** object.
**
** If the RID object does not exist in the repository, then
** pTar is zeroed.
Lines 556-570. Previously:

    fossil_fatal("Checkin not found: %s", g.argv[2]);

Now:
  verify_all_options();

  if( g.argc!=4 ){
    usage("VERSION OUTPUTFILE");
  }
  rid = name_to_typed_rid(g.argv[2], "ci");
  if( rid==0 ){
    fossil_fatal("Check-in not found: %s", g.argv[2]);
    return;
  }

  if( zName==0 ){
    zName = db_text("default-name",
       "SELECT replace(%Q,' ','_') "
          " || strftime('_%%Y-%%m-%%d_%%H%%M%%S_', event.mtime) "
Lines 580-594. Previously:

** Generate a compressed tarball for a checkin.

Now:
  blob_reset(&tarball);
}

/*
** WEBPAGE: tarball
** URL: /tarball/RID.tar.gz
**
** Generate a compressed tarball for a check-in.
** Return that tarball as the HTTP reply content.
**
** Optional URL Parameters:
**
** - name=NAME[.tar.gz] is base name of the output file. Defaults to
** something project/version-specific. The prefix of the name, up to
** the last '.', are used as the top-most directory name in the tar
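A hypothetical request matching the URL pattern above:

    /tarball/a1b2c3d4e5.tar.gz?name=myproject-1.0

would deliver a gzip-compressed tarball whose top-level directory name is taken from the part of the name before the last '.', as the help text explains.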

Changes to src/th_main.c.

Lines 268-282. Previously:

** Attempt to lookup the specified checkin and file name into an rid.

Now:
  }else if( rid==0 ){
    Th_SetResult(interp, "name not found", -1);
  }
  return rid;
}

/*
** Attempt to lookup the specified check-in and file name into an rid.
** This function was copied from artifact_from_ci_and_filename() in
** info.c; however, it has been modified to report TH1 script errors
** instead of "fatal errors".
*/
int th1_artifact_from_ci_and_filename(
  Th_Interp *interp,
  const char *zCI,
Lines 511-525 (511-524 before; new line added):
** "th1Hooks"        = FOSSIL_ENABLE_TH1_HOOKS
** "tcl"             = FOSSIL_ENABLE_TCL
** "useTclStubs"     = USE_TCL_STUBS
** "tclStubs"        = FOSSIL_ENABLE_TCL_STUBS
** "tclPrivateStubs" = FOSSIL_ENABLE_TCL_PRIVATE_STUBS
** "json"            = FOSSIL_ENABLE_JSON
** "markdown"        = FOSSIL_ENABLE_MARKDOWN
** "unicodeCmdLine"  = !BROKEN_MINGW_CMDLINE
**
*/
static int hasfeatureCmd(
  Th_Interp *interp,
  void *p,
  int argc,
  const char **argv,
Lines 569-587 (568-581 before; new lines added):
    rc = 1;
  }
#endif
#if defined(FOSSIL_ENABLE_JSON)
  else if( 0 == fossil_strnicmp( zArg, "json\0", 5 ) ){
    rc = 1;
  }
#endif
#if !defined(BROKEN_MINGW_CMDLINE)
  else if( 0 == fossil_strnicmp( zArg, "unicodeCmdLine\0", 15 ) ){
    rc = 1;
  }
#endif
  else if( 0 == fossil_strnicmp( zArg, "markdown\0", 9 ) ){
    rc = 1;
  }
  if( g.thTrace ){
    Th_Trace("[hasfeature %#h] => %d<br />\n", argl[1], zArg, rc);
  }
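As an illustrative use (not part of the change itself), a TH1 script in a customized skin could test the new flag with if {[hasfeature unicodeCmdLine]} {...} before emitting content that depends on Unicode command-line support.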

Changes to src/timeline.c.

Lines 1091-1108. Previously:

**    uf=FUUID       Show only checkins that use given file version
**    namechng       Show only checkins that filename changes

Now:
**    ng             Suppress the graph if present
**    nd             Suppress "divider" lines
**    v              Show details of files changed
**    f=UUID         Show family (immediate parents and children) of UUID
**    from=UUID      Path from...
**    to=UUID          ... to this
**    shortest         ... show only the shortest path
**    uf=FUUID       Show only check-ins that use given file version
**    brbg           Background color from branch name
**    ubg            Background color from user
**    namechng       Show only check-ins that filename changes
**    ym=YYYY-MM     Shown only events for the given year/month.
**    datefmt=N      Override the date format
**
** p= and d= can appear individually or together.  If either p= or d=
** appear, then u=, y=, a=, and b= are ignored.
**
** If both a= and b= appear then both upper and lower bounds are honored.
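As an illustration (hypothetical hash), a URL such as

    /timeline?uf=FILE-HASH

would list only check-ins whose manifests reference that particular file version.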
Lines 1123-1141. Previously:

  const char *zUses = P("uf");       /* Only show checkins hold this file */
  const char *zYearMonth = P("ym");  /* Show checkins for the given YYYY-MM */
  const char *zYearWeek = P("yw");   /* Checkins for YYYY-WW (week-of-year) */
  int renameOnly = P("namechng")!=0; /* Show only checkins that rename files */

Now:
  const char *zAfter = P("a");       /* Events after this time */
  const char *zBefore = P("b");      /* Events before this time */
  const char *zCirca = P("c");       /* Events near this time */
  const char *zMark = P("m");        /* Mark this event or an event this time */
  const char *zTagName = P("t");     /* Show events with this tag */
  const char *zBrName = P("r");      /* Show events related to this tag */
  const char *zSearch = P("s");      /* Search string */
  const char *zUses = P("uf");       /* Only show check-ins hold this file */
  const char *zYearMonth = P("ym");  /* Show check-ins for the given YYYY-MM */
  const char *zYearWeek = P("yw");   /* Check-ins for YYYY-WW (week-of-year) */
  int useDividers = P("nd")==0;      /* Show dividers if "nd" is missing */
  int renameOnly = P("namechng")!=0; /* Show only check-ins that rename files */
  int tagid;                         /* Tag ID */
  int tmFlags = 0;                   /* Timeline flags */
  const char *zThisTag = 0;          /* Suppress links to this tag */
  const char *zThisUser = 0;         /* Suppress links to this user */
  HQuery url;                        /* URL for various branch links */
  int from_rid = name_to_typed_rid(P("from"),"ci"); /* from= for paths */
  int to_rid = name_to_typed_rid(P("to"),"ci");    /* to= for path timelines */
Lines 1378-1392. Previously:

        /* The next two blob_appendf() calls add SQL that causes checkins that

Now:
    }
    if( tagid>0 ){
      blob_append_sql(&sql,
        " AND (EXISTS(SELECT 1 FROM tagxref"
            " WHERE tagid=%d AND tagtype>0 AND rid=blob.rid)\n", tagid);

      if( zBrName ){
        /* The next two blob_appendf() calls add SQL that causes check-ins that
        ** are not part of the branch which are parents or children of the
        ** branch to be included in the report.  This related check-ins are
        ** useful in helping to visualize what has happened on a quiescent
        ** branch that is infrequently merged with a much more activate branch.
        */
        blob_append_sql(&sql,
          " OR EXISTS(SELECT 1 FROM plink CROSS JOIN tagxref ON rid=cid"
Lines 1442-1456. Previously:

        zEType = "checkin";

Now:
          cSep = ',';
        }
        blob_append_sql(&sql, ")");
      }
    }else{ /* zType!="all" */
      blob_append_sql(&sql, " AND event.type=%Q", zType);
      if( zType[0]=='c' ){
        zEType = "check-in";
      }else if( zType[0]=='w' ){
        zEType = "wiki edit";
      }else if( zType[0]=='t' ){
        zEType = "ticket change";
      }else if( zType[0]=='e' ){
        zEType = "technical note";
      }else if( zType[0]=='g' ){
Lines 1823-1837. Previously:

**                        etc.) after the checkin comment.

Now:
**   -t|--type TYPE       Output items from the given types only, such as:
**                            ci = file commits only
**                            e  = technical notes only
**                            t  = tickets only
**                            w  = wiki commits only
**   -v|--verbose         Output the list of files changed by each commit
**                        and the type of each change (edited, deleted,
**                        etc.) after the check-in comment.
**   -W|--width <num>     Width of lines (default is to auto-detect). Must be
**                        >20 or 0 (= no limit, resulting in a single line per
**                        entry).
**   -R REPO_FILE         Specifies the repository db to use. Default is
**                        the current checkout's repository.
*/
void timeline_cmd(void){
Lines 1939-1953. Previously:

       * file checkins */

Now:
    }
    zDate = mprintf("(SELECT julianday(%Q%s, 'utc'))", zOrigin, zShift);
  }

  if( zFilePattern ){
    if( zType==0 ){
      /* When zFilePattern is specified and type is not specified, only show
       * file check-ins */
      zType="ci";
    }
    file_tree_name(zFilePattern, &treeName, 1);
    if( fossil_strcmp(blob_str(&treeName), ".")==0 ){
      /* When zTreeName refers to g.zLocalRoot, it's like not specifying
       * zFilePattern. */
      zFilePattern = 0;
Lines 2030-2046. Previously:

** Display all instances of child checkins that appear earlier in time
** parent and child checking and their times are shown.

Now:


/*
** COMMAND: test-timewarp-list
**
** Usage: %fossil test-timewarp-list ?-v|---verbose?
**
** Display all instances of child check-ins that appear earlier in time
** than their parent.  If the -v|--verbose option is provided, both the
** parent and child check-ins and their times are shown.
*/
void test_timewarp_cmd(void){
  Stmt q;
  int verboseFlag;

  db_find_and_open_repository(0, 0);
  verboseFlag = find_option("verbose", "v", 0)!=0;

Changes to src/tkt.c.

Lines 858-872. Previously:

    zTitle = mprintf("Check-Ins Associated With Ticket %h", zUuid);

Now:
       "%s/tkttimeline?name=%T", g.zTop, zUuid);
  }
  style_submenu_element("History", "History",
    "%s/tkthistory/%s", g.zTop, zUuid);
  style_submenu_element("Status", "Status",
    "%s/info/%s", g.zTop, zUuid);
  if( zType[0]=='c' ){
    zTitle = mprintf("Check-ins Associated With Ticket %h", zUuid);
  }else{
    zTitle = mprintf("Timeline Of Ticket %h", zUuid);
  }
  style_header("%z", zTitle);

  sqlite3_snprintf(6, zGlobPattern, "%s", zUuid);
  canonical16(zGlobPattern, strlen(zGlobPattern));

Changes to src/update.c.

Lines 639-704. Previously:

** Get the contents of a file within the checking "revision".  If
  const char *revision,    /* The checkin containing the file */
    fossil_fatal("no such checkin: %s", revision);
      fossil_fatal("file %s does not exist in checkin: %s", file, revision);
    fossil_fatal("could not parse manifest for checkin: %s", revision);

Now:
      blob_reset(&path);
    }
  }
}


/*
** Get the contents of a file within the check-in "revision".  If
** revision==NULL then get the file content for the current checkout.
*/
int historical_version_of_file(
  const char *revision,    /* The check-in containing the file */
  const char *file,        /* Full treename of the file */
  Blob *content,           /* Put the content here */
  int *pIsLink,            /* Set to true if file is link. */
  int *pIsExe,             /* Set to true if file is executable */
  int *pIsBin,             /* Set to true if file is binary */
  int errCode              /* Error code if file not found.  Panic if 0. */
){
  Manifest *pManifest;
  ManifestFile *pFile;
  int rid=0;

  if( revision ){
    rid = name_to_typed_rid(revision,"ci");
  }else if( !g.localOpen ){
    rid = name_to_typed_rid(db_get("main-branch","trunk"),"ci");
  }else{
    rid = db_lget_int("checkout", 0);
  }
  if( !is_a_version(rid) ){
    if( errCode>0 ) return errCode;
    fossil_fatal("no such check-in: %s", revision);
  }
  pManifest = manifest_get(rid, CFTYPE_MANIFEST, 0);

  if( pManifest ){
    pFile = manifest_file_find(pManifest, file);
    if( pFile ){
      int rc;
      rid = uuid_to_rid(pFile->zUuid, 0);
      if( pIsExe ) *pIsExe = ( manifest_file_mperm(pFile)==PERM_EXE );
      if( pIsLink ) *pIsLink = ( manifest_file_mperm(pFile)==PERM_LNK );
      manifest_destroy(pManifest);
      rc = content_get(rid, content);
      if( rc && pIsBin ){
        *pIsBin = looks_like_binary(content);
      }
      return rc;
    }
    manifest_destroy(pManifest);
    if( errCode<=0 ){
      fossil_fatal("file %s does not exist in check-in: %s", file, revision);
    }
  }else if( errCode<=0 ){
    if( revision==0 ){
      revision = db_text("current", "SELECT uuid FROM blob WHERE rid=%d", rid);
    }
    fossil_fatal("could not parse manifest for check-in: %s", revision);
  }
  return errCode;
}


/*
** COMMAND: revert
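A minimal caller sketch for historical_version_of_file() above (illustrative only; it assumes fossil's internal headers and an open repository, and the check-in and file names are hypothetical):

    Blob content;
    int isLink = 0, isExe = 0, isBin = 0;
    blob_zero(&content);
    /* Load src/main.c as it existed in the "trunk" check-in.  With
    ** errCode==0 the call is fatal if the file or check-in is missing. */
    historical_version_of_file("trunk", "src/main.c", &content,
                               &isLink, &isExe, &isBin, 0);
    if( !isBin ){
      fossil_print("%s", blob_str(&content));  /* show the historical text */
    }
    blob_reset(&content);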

Changes to src/winhttp.c.

Lines 645-662 (645-658 before; new lines added):
**
**         --localauth
**
**              Enables automatic login if the --localauth option is present
**              and the "localauth" setting is off and the connection is from
**              localhost.
**
**         --repolist
**
**              If REPOSITORY is directory, URL "/" lists all repositories.
**
**         --scgi
**
**              Create an SCGI server instead of an HTTP server
**
**
**    fossil winsrv delete ?SERVICE-NAME?
**
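For illustration, the --repolist option documented above reaches the service in the same form it would be given to a hand-started server, e.g. (hypothetical port and directory):

    fossil server --port 8080 --repolist D:/Repositories

where the URL "/" then lists every repository found under that directory.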
Lines 706-720 (702-715 before; new line added):
    const char *zPassword   = find_option("password", "W", 1);
    const char *zPort       = find_option("port", "P", 1);
    const char *zNotFound   = find_option("notfound", 0, 1);
    const char *zFileGlob   = find_option("files", 0, 1);
    const char *zLocalAuth  = find_option("localauth", 0, 0);
    const char *zRepository = find_repository_option();
    int useSCGI             = find_option("scgi", 0, 0)!=0;
    int allowRepoList       = find_option("repolist",0,0)!=0;
    Blob binPath;

    verify_all_options();
    if( g.argc==4 ){
      zSvcName = g.argv[3];
    }else if( g.argc>4 ){
      fossil_fatal("too many arguments for create method.");
Lines 753-767 (748-761 before; new line added):
    }
    db_close(0);
    /* Build the fully-qualified path to the service binary file. */
    blob_zero(&binPath);
    blob_appendf(&binPath, "\"%s\" server", g.nameOfExe);
    if( zPort ) blob_appendf(&binPath, " --port %s", zPort);
    if( useSCGI ) blob_appendf(&binPath, " --scgi");
    if( allowRepoList ) blob_appendf(&binPath, " --repolist");
    if( zNotFound ) blob_appendf(&binPath, " --notfound \"%s\"", zNotFound);
    if( zFileGlob ) blob_appendf(&binPath, " --files-urlenc %T", zFileGlob);
    if( zLocalAuth ) blob_append(&binPath, " --localauth", -1);
    blob_appendf(&binPath, " \"%s\"", g.zRepositoryName);
    /* Create the service. */
    hScm = OpenSCManagerW(NULL, NULL, SC_MANAGER_ALL_ACCESS);
    if( !hScm ) winhttp_fatal("create", zSvcName, win32_get_last_errmsg());

Changes to win/Makefile.mingw.

Lines 507-523 (507-520 before; new lines added):
  $(SRCDIR)/../skins/khaki/header.txt \
  $(SRCDIR)/../skins/plain_gray/css.txt \
  $(SRCDIR)/../skins/plain_gray/footer.txt \
  $(SRCDIR)/../skins/plain_gray/header.txt \
  $(SRCDIR)/../skins/rounded1/css.txt \
  $(SRCDIR)/../skins/rounded1/footer.txt \
  $(SRCDIR)/../skins/rounded1/header.txt \
  $(SRCDIR)/../skins/xekri/css.txt \
  $(SRCDIR)/../skins/xekri/footer.txt \
  $(SRCDIR)/../skins/xekri/header.txt \
  $(SRCDIR)/diff.tcl \
  $(SRCDIR)/markdown.md

TRANS_SRC = \
  $(OBJDIR)/add_.c \
  $(OBJDIR)/allrepo_.c \
  $(OBJDIR)/attach_.c \
Lines 915-929 (912-926 before). Previously:

$(APPNAME):	$(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(APPTARGETS)

Now:

APPTARGETS += $(LIBTARGETS)

ifdef FOSSIL_BUILD_SSL
APPTARGETS += openssl
endif

$(APPNAME):	$(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o
	$(CODECHECK1) $(TRANS_SRC)
	$(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/fossil.o

# This rule prevents make from using its default rules to try build
# an executable named "manifest" out of the file named "manifest.c"
#
$(SRCDIR)/../manifest:

Changes to win/Makefile.mingw.mistachkin.

Lines 507-523 (507-520 before; new lines added):
  $(SRCDIR)/../skins/khaki/header.txt \
  $(SRCDIR)/../skins/plain_gray/css.txt \
  $(SRCDIR)/../skins/plain_gray/footer.txt \
  $(SRCDIR)/../skins/plain_gray/header.txt \
  $(SRCDIR)/../skins/rounded1/css.txt \
  $(SRCDIR)/../skins/rounded1/footer.txt \
  $(SRCDIR)/../skins/rounded1/header.txt \
  $(SRCDIR)/../skins/xekri/css.txt \
  $(SRCDIR)/../skins/xekri/footer.txt \
  $(SRCDIR)/../skins/xekri/header.txt \
  $(SRCDIR)/diff.tcl \
  $(SRCDIR)/markdown.md

TRANS_SRC = \
  $(OBJDIR)/add_.c \
  $(OBJDIR)/allrepo_.c \
  $(OBJDIR)/attach_.c \
Lines 915-929 (912-926 before). Previously:

$(APPNAME):	$(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o $(APPTARGETS)

Now:

APPTARGETS += $(LIBTARGETS)

ifdef FOSSIL_BUILD_SSL
APPTARGETS += openssl
endif

$(APPNAME):	$(APPTARGETS) $(OBJDIR)/headers $(CODECHECK1) $(OBJ) $(EXTRAOBJ) $(OBJDIR)/fossil.o
	$(CODECHECK1) $(TRANS_SRC)
	$(TCC) -o $@ $(OBJ) $(EXTRAOBJ) $(LIB) $(OBJDIR)/fossil.o

# This rule prevents make from using its default rules to try build
# an executable named "manifest" out of the file named "manifest.c"
#
$(SRCDIR)/../manifest:

Changes to win/Makefile.msc.

Lines 347-363 (347-360 before; new lines added):
        $(SRCDIR)\../skins/khaki/header.txt \
        $(SRCDIR)\../skins/plain_gray/css.txt \
        $(SRCDIR)\../skins/plain_gray/footer.txt \
        $(SRCDIR)\../skins/plain_gray/header.txt \
        $(SRCDIR)\../skins/rounded1/css.txt \
        $(SRCDIR)\../skins/rounded1/footer.txt \
        $(SRCDIR)\../skins/rounded1/header.txt \
        $(SRCDIR)\../skins/xekri/css.txt \
        $(SRCDIR)\../skins/xekri/footer.txt \
        $(SRCDIR)\../skins/xekri/header.txt \
        $(SRCDIR)\diff.tcl \
        $(SRCDIR)\markdown.md

OBJ   = $(OX)\add$O \
        $(OX)\allrepo$O \
        $(OX)\attach$O \
        $(OX)\bag$O \
Lines 708-747 (705-744 before). Previously (the clean rules used a leading "-" and no redirection):

	-del $(OX)\*.obj
	-del *.obj
	-del *_.c
	-del *.h
	-del *.ilk
	-del *.map
	-del *.res
	-del headers
	-del linkopts
	-del vc*.pdb
	-del $(APPNAME)
	-del $(PDBNAME)
	-del translate$E
	-del translate$P
	-del mkindex$E
	-del mkindex$P
	-del makeheaders$E
	-del makeheaders$P
	-del mkversion$E
	-del mkversion$P
	-del codecheck1$E
	-del codecheck1$P
	-del mkbuiltin$E
	-del mkbuiltin$P

Now:
page_index.h: mkindex$E $(SRC)
	$** > $@

builtin_data.h:	mkbuiltin$E $(EXTRA_FILES)
	mkbuiltin$E --prefix $(SRCDIR)/ $(EXTRA_FILES) > $@

clean:
	del $(OX)\*.obj 2>NUL
	del *.obj 2>NUL
	del *_.c 2>NUL
	del *.h 2>NUL
	del *.ilk 2>NUL
	del *.map 2>NUL
	del *.res 2>NUL
	del headers 2>NUL
	del linkopts 2>NUL
	del vc*.pdb 2>NUL

realclean: clean
	del $(APPNAME) 2>NUL
	del $(PDBNAME) 2>NUL
	del translate$E 2>NUL
	del translate$P 2>NUL
	del mkindex$E 2>NUL
	del mkindex$P 2>NUL
	del makeheaders$E 2>NUL
	del makeheaders$P 2>NUL
	del mkversion$E 2>NUL
	del mkversion$P 2>NUL
	del codecheck1$E 2>NUL
	del codecheck1$P 2>NUL
	del mkbuiltin$E 2>NUL
	del mkbuiltin$P 2>NUL

$(OBJDIR)\json$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_artifact$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_branch$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_config$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_diff$O : $(SRCDIR)\json_detail.h
$(OBJDIR)\json_dir$O : $(SRCDIR)\json_detail.h
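For reference, the cleanup targets above are run as, e.g., "nmake /f Makefile.msc clean" from the win/ directory; the "2>NUL" redirection simply hides the error message that "del" prints when a file to be removed does not exist.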

Changes to win/fossil.rc.

Lines 91-105. Previously:

      VALUE "FileDescription", "Simple, high-reliability, distributed software configuration management system.\0"

Now:
  FILESUBTYPE VFT2_UNKNOWN
BEGIN
  BLOCK "StringFileInfo"
  BEGIN
    BLOCK "040904b0"
    BEGIN
      VALUE "CompanyName", "Fossil Development Team\0"
      VALUE "FileDescription", "Fossil is a simple, high-reliability, distributed software configuration management system.\0"
      VALUE "ProductName", "Fossil\0"
      VALUE "ProductVersion", "Fossil " RELEASE_VERSION " " MANIFEST_VERSION " " MANIFEST_DATE " UTC\0"
      VALUE "FileVersion", "Fossil " RELEASE_VERSION " " MANIFEST_VERSION " " MANIFEST_DATE " UTC\0"
      VALUE "InternalName", "fossil\0"
      VALUE "LegalCopyright", "Copyright © " MANIFEST_YEAR " by D. Richard Hipp.  All rights reserved.\0"
      VALUE "OriginalFilename", "fossil.exe\0"
      VALUE "CompilerName", COMPILER_NAME "\0"

Changes to www/adding_code.wiki.

Lines 148-162. Previously:

windows systems.

Now:
you like (as long as it does not collide with some other symbol in the
Fossil code) but it is traditional to name the function
"<i>commandname</i><b>_cmd</b>", as is done in the example.

You could also use "printf()" instead of "fossil_print()" to generate
the output text, if desired.  But "fossil_print()" is recommended as
it has extra logic to insert \r characters at the right times on
Windows systems.

Once you have the command running, you can then start adding code to
make it do useful things.  There are lots of utility functions in
Fossil for parsing command-line options and for
opening and accessing and manipulating the repository and
the working check-out.  Study implementations of existing commands
to get an idea of how things are done.  You can easily find the implementations

Changes to www/antibot.wiki.

Lines 57-71. Previously:

Internet Explorer 8.0, both running on windows NT.  The third

Now:
<li> Mozilla/5.0 (Windows NT 6.1; rv:19.0) Gecko/20100101 Firefox/19.0
<li> Mozilla/4.0 (compatible; MSIE 8.0; Windows_NT 5.1; Trident/4.0)
<li> Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
<li> Wget/1.12 (openbsd4.9)
</ul>

The first two UserAgent strings above identify Firefox 19 and
Internet Explorer 8.0, both running on Windows NT.  The third
example is the spider used by Google to index the internet.
The fourth example is the "wget" utility running on OpenBSD.
Thus the first two UserAgent strings above identify the requestor
as human whereas the second two identify the requestor as a spider.
Note that the UserAgent string is completely under the control
of the requestor and so a malicious spider can forge a UserAgent
string that makes it look like a human.  But most spiders truly

Changes to www/branching.wiki.

Lines 185-199. Previously:

The initial checkin of every repository has two propagating tags.  In

Now:
is a descendant through direct children.  Tag propagation does not
cross merges.  Tag propagation also stops as soon
as it encounters another check-in with the same tag.  A cancellation tag
is attached to a single check-in in order to either override a one-time
tag that was previously placed on that same check-in, or to block
tag propagation from an ancestor.

The initial check-in of every repository has two propagating tags.  In
figure 5, that initial check-in is check-in 1.  The <b>branch</b> tag
tells (by its value)  what branch the check-in is a member of.
The default branch is called "trunk."  All tags that begin with "<b>sym-</b>"
are symbolic name tags.  When a symbolic name tag is attached to a
check-in, that allows you to refer to that check-in by its symbolic
name rather than by its 40-character SHA1 hash name.  When a symbolic name
tag propagates (as does the <b>sym-trunk</b> tag) then referring to that

Changes to www/build.wiki.

Lines 31-90. Previously:

<li><p>Point your web browser at
complete source code and download it to your browser.
right-hand corner of this screen (to the right of "This page was
<li><i>(Optional, unix only)</i>

Now:
containing a snapshot of the <em>latest</em> version directly from 
Fossil's own fossil repository. Additionally, source archives of 
<em>released</em> versions of
fossil are available from the <a href="http://www.fossil-scm.org/download.html">downloads page</a>.
To obtain a development version of fossil, follow these steps:</p>

<ol>
<li><p>Point your web browser to
<a href="http://www.fossil-scm.org/">
http://www.fossil-scm.org/</a>.</p></li>

<li><p>Click on the 
<a href="http://www.fossil-scm.org/fossil/timeline">Timeline</a> 
link at the top of the page.</p></li>

<li><p>Select a version of of Fossil you want to download.  The latest
version on the trunk branch is usually a good choice.  Click on its
link.</p></li>

<li><p>Finally, click on one of the
"Zip Archive" or "Tarball" links, according to your preference.
These link will build a ZIP archive or a gzip-compressed tarball of the 
complete source code and download it to your computer.
</ol>

<h2>Aside: Is it really safe to use an unreleased development version of
the Fossil source code?</h2>

Yes!  Any check-in on the 
[/timeline?t=trunk | trunk branch] of the Fossil
[http://fossil-scm.org/fossil/timeline | Fossil self-hosting repository]
will work fine.  (Dodgy code is always on a branch.)  In the unlikely
event that you pick a version with a serious bug, it still won't
clobber your files.  Fossil uses several
[./selfcheck.wiki | self-checks] prior to committing any
repository change that prevent loss-of-work due to bugs.

The Fossil [./selfhost.wiki | self-hosting repositories], especially
the one at [http://www.fossil-scm.org/fossil], usually run a version
of trunk that is less than a week or two old.  Look at the bottom
left-hand corner of this screen (to the right of "This page was
generated in...") to see exactly which version of Fossil is
rendering this page.  It is always safe to use whatever version
of the Fossil code you find running on the main Fossil website.

<h2>2.0 Compiling</h2>

<ol>
<li value="5">
<p>Unpack the ZIP or tarball you downloaded then
<b>cd</b> into the directory created.</p></li>

<li><i>(Optional, Unix only)</i>
Run <b>./configure</b> to construct a makefile.

<ol type="a">
<li><p>
If you do not have the OpenSSL library installed on your system, then
add <b>--with-openssl=none</b> to omit the https functionality.

</ol>

<li><p>Run "<b>make</b>" to build the "fossil" or "fossil.exe" executable.
The details depend on your platform and compiler.

<ol type="a">
<li><p><i>Unix</i> → the configure-generated Makefile should work on
all unix and unix-like systems.  Simply type "<b>make</b>".

<li><p><i>Unix without running "configure"</i> → if you prefer to avoid running configure, you
can also use: <b>make -f Makefile.classic</b>.  You may want to make minor
edits to Makefile.classic to configure the build for your system.

<li><p><i>MinGW/MinGW-w64</i> → Use the mingw makefile:
"<b>make -f win/Makefile.mingw</b>". On a Windows box you will
need either Cygwin or Msys as build environment. On Cygwin, Linux
or Darwin you may want to make minor edits to win/Makefile.mingw
to configure the cross-compile environment.




<li><p><i>MSVC</i> → Use the MSVC makefile.  First
change to the "win/" subdirectory ("<b>cd win</b>") then run
"<b>nmake /f Makefile.msc</b>".<br><br>Alternatively, the batch
file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to
detect and use the latest installed version of MSVC.<br><br>To enable
the optional <a href="https://www.openssl.org/">OpenSSL</a> support,
first <a href="https://www.openssl.org/source/">download the official
source code for OpenSSL</a> and extract it to an appropriately named
"<b>openssl-X.Y.ZA</b>" subdirectory within the local
[/tree?ci=trunk&name=compat | compat] directory (e.g.
"<b>compat/openssl-1.0.2</b>"), then make sure that some recent
<a href="http://www.perl.org/">Perl</a> binaries are installed locally,
and finally run one of the following commands:
<blockquote><pre>
nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>
<blockquote><pre>
buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>

<li><p><i>Cygwin</i> → The same as other unix-like systems. It is
recommended to configure using: "<b>configure --disable-internal-sqlite</b>",
making sure you have the "libsqlite3-devel" , "zlib-devel" and
"openssl-devel" packages installed first.
</ol>
</ol>

<h2>3.0 Installing</h2>

<ol>
<li value="8">
<p>The finished binary is named "fossil" (or "fossil.exe" on windows).  
Put this binary in a 
directory that is somewhere on your PATH environment variable.
It does not matter where.</p>

<li>
<p><b>(Optional:)</b>
To uninstall, just delete the binary.</p>







</ol>

<li><p>Run "<b>make</b>" to build the "fossil" or "fossil.exe" executable.
The details depend on your platform and compiler.

<ol type="a">
<li><p><i>Unix</i> → the configure-generated Makefile should work on
all Unix and Unix-like systems.  Simply type "<b>make</b>".

<li><p><i>Unix without running "configure"</i> → if you prefer to avoid running configure, you
can also use: <b>make -f Makefile.classic</b>.  You may want to make minor
edits to Makefile.classic to configure the build for your system.

<li><p><i>MinGW 3.x (not 4.0) / MinGW-w64</i> → Use the mingw makefile:
"<b>make -f win/Makefile.mingw</b>". On a Windows box you will
need either Cygwin or Msys as the build environment. On Cygwin, Linux
or Darwin you may want to make minor edits to win/Makefile.mingw
to configure the cross-compile environment.

Hint: don't use MinGW 4.0; it will compile, but fossil won't work correctly.  See
<a href="https://www.fossil-scm.org/index.html/tktview/18cff45a4e210430e24c">https://www.fossil-scm.org/index.html/tktview/18cff45a4e210430e24c</a>.

<li><p><i>MSVC</i> → Use the MSVC makefile.  First
change to the "win/" subdirectory ("<b>cd win</b>") then run
"<b>nmake /f Makefile.msc</b>".<br><br>Alternatively, the batch
file "<b>win\buildmsvc.bat</b>" may be used and it will attempt to
detect and use the latest installed version of MSVC.<br><br>To enable
the optional <a href="https://www.openssl.org/">OpenSSL</a> support,
first <a href="https://www.openssl.org/source/">download the official
source code for OpenSSL</a> and extract it to an appropriately named
"<b>openssl-X.Y.ZA</b>" subdirectory within the local
[/tree?ci=trunk&name=compat | compat] directory (e.g.
"<b>compat/openssl-1.0.2</b>"), then make sure that some recent
<a href="http://www.perl.org/">Perl</a> binaries are installed locally,
and finally run one of the following commands:
<blockquote><pre>
nmake /f Makefile.msc FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>
<blockquote><pre>
buildmsvc.bat FOSSIL_ENABLE_SSL=1 FOSSIL_BUILD_SSL=1 PERLDIR=C:\full\path\to\Perl\bin
</pre></blockquote>

<li><p><i>Cygwin</i> → The same as other Unix-like systems. It is
recommended to configure using: "<b>configure --disable-internal-sqlite</b>",
making sure you have the "libsqlite3-devel", "zlib-devel" and
"openssl-devel" packages installed first.
</ol>
</ol>
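
The platform-specific details above reduce, on a typical Unix system, to the
following minimal sketch (assuming the sources are already unpacked and you are
in the top-level source directory; omit <b>--with-openssl=none</b> if OpenSSL
is installed and HTTPS support is wanted):
<blockquote><pre>
./configure --with-openssl=none
make
</pre></blockquote>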

<h2>3.0 Installing</h2>

<ol>
<li value="8">
<p>The finished binary is named "fossil" (or "fossil.exe" on Windows).  
Put this binary in a 
directory that is somewhere on your PATH environment variable.
It does not matter where.</p>
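<p>For example, on a Unix-like system this step can be as simple as the
following sketch (<b>~/bin</b> is just one possible choice; any directory on
your PATH will do):</p>
<blockquote><pre>
cp fossil ~/bin/
fossil version
</pre></blockquote>
<p>The second command merely confirms that the shell now finds the newly
installed binary.</p>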

<li>
<p><b>(Optional:)</b>
To uninstall, just delete the binary.</p>

Changes to www/changes.wiki.

  *  Prompt the user to optionally fix invalid UTF-8 at check-in.
  *  Added a line-number toggle option to the [/help?cmd=/info|/info]
     and [/help?cmd=/artifact|/artifact] pages.
  *  Most commands now issue errors rather than silently ignoring unrecognized
     command-line options.
  *  Use full 40-character SHA1 hashes (instead of abbreviations) in most
     internal URLs.
  *  The "ssh:" sync method on windows now uses "plink.exe" instead of "ssh" as
     the secure-shell client program.
  *  Prevent a partial clone when the connection is lost.
  *  Make the distinction between 301 and 302 redirects.
  *  Allow commits against a closed check-in as long as the commit goes onto
     a different branch.
  *  Improved cache control in the web interface reduces unnecessary requests
     for common resources like the page logo and CSS.







  *  Prompt the user to optionally fix invalid UTF-8 at check-in.
  *  Added a line-number toggle option to the [/help?cmd=/info|/info]
     and [/help?cmd=/artifact|/artifact] pages.
  *  Most commands now issue errors rather than silently ignoring unrecognized
     command-line options.
  *  Use full 40-character SHA1 hashes (instead of abbreviations) in most
     internal URLs.
  *  The "ssh:" sync method on Windows now uses "plink.exe" instead of "ssh" as
     the secure-shell client program.
  *  Prevent a partial clone when the connection is lost.
  *  Make the distinction between 301 and 302 redirects.
  *  Allow commits against a closed check-in as long as the commit goes onto
     a different branch.
  *  Improved cache control in the web interface reduces unnecessary requests
     for common resources like the page logo and CSS.
  *  New --integrate option to [/help?cmd=merge | fossil merge], which
     automatically closes the merged branch when committing.
  *  Renamed <tt>/stats_report</tt> page to [/reports]. Graph width is now
     relative, not absolute.
  *  Added <tt>yw=YYYY-WW</tt> (year-week) filter to timeline to limit the results
     to a specific year and calendar week number, e.g. [/timeline?yw=2013-01].
  *  Updates to SQLite to prevent opening a repository file using file descriptors
     1 or 2 on unix.  This fixes a bug under which an assertion failure could
     overwrite part of a repository database file, corrupting it.
  *  Added support for unlimited line lengths in side-by-side diffs.
  *  New --close option to [/help?cmd=commit | fossil commit], which
     immediately closes the branch being committed.
  *  Added <tt>chart</tt> option to [/help?cmd=bisect | fossil bisect].
  *  Improvements to the "human or bot?" determination.
  *  Reports errors about missing CGI-standard environment variables for HTTP







  *  New --integrate option to [/help?cmd=merge | fossil merge], which
     automatically closes the merged branch when committing.
  *  Renamed <tt>/stats_report</tt> page to [/reports]. Graph width is now
     relative, not absolute.
  *  Added <tt>yw=YYYY-WW</tt> (year-week) filter to timeline to limit the results
     to a specific year and calendar week number, e.g. [/timeline?yw=2013-01].
  *  Updates to SQLite to prevent opening a repository file using file descriptors
     1 or 2 on Unix.  This fixes a bug under which an assertion failure could
     overwrite part of a repository database file, corrupting it.
  *  Added support for unlimited line lengths in side-by-side diffs.
  *  New --close option to [/help?cmd=commit | fossil commit], which
     immediately closes the branch being committed.
  *  Added <tt>chart</tt> option to [/help?cmd=bisect | fossil bisect].
  *  Improvements to the "human or bot?" determination.
  *  Reports errors about missing CGI-standard environment variables for HTTP
<h2>Changes For Version 1.26 (2013-06-18)</h2>
  *  The argument to the --port option for the [/help?cmd=ui | fossil ui] and
     [/help?cmd=server | fossil server] commands can take an IP address in addition
     to the port number, causing Fossil to bind to just that one IP address.
  *  After prompting for a password, also ask if that password should be
     remembered.
  *  Performance improvements to the diff engine.
  *  Fix the side-by-side diff engine to work better with multi-byte unicode text.
  *  Color-coding in the web-based annotation (blame) display.  Fix the annotation
     engine so that it is no longer confused by time-warps.
  *  The markdown formatter is now available by default and can be used for
     tickets, wiki, and embedded documentation.
  *  Add subcommands "fossil bisect log" and "fossil bisect status" to the
     [/help?cmd=bisect | fossil bisect] command, as well as other bisect enhancements.
  *  Enhanced defenses that prevent spiders from using excessive CPU and bandwidth.







<h2>Changes For Version 1.26 (2013-06-18)</h2>
  *  The argument to the --port option for the [/help?cmd=ui | fossil ui] and
     [/help?cmd=server | fossil server] commands can take an IP address in addition
     to the port number, causing Fossil to bind to just that one IP address.
  *  After prompting for a password, also ask if that password should be
     remembered.
  *  Performance improvements to the diff engine.
  *  Fix the side-by-side diff engine to work better with multi-byte Unicode text.
  *  Color-coding in the web-based annotation (blame) display.  Fix the annotation
     engine so that it is no longer confused by time-warps.
  *  The markdown formatter is now available by default and can be used for
     tickets, wiki, and embedded documentation.
  *  Add subcommands "fossil bisect log" and "fossil bisect status" to the
     [/help?cmd=bisect | fossil bisect] command, as well as other bisect enhancements.
  *  Enhanced defenses that prevent spiders from using excessive CPU and bandwidth.
     is used.
  *  Add javascript so that clicking on column headers in a ticket report
     sorts by the indicated column.
  *  Add the "fossil cat" command which is basically an alias for
     "fossil finfo -p".
  *  Hyperlinks with the class "button" are rendered as submenu buttons
     on embedded documentation.
  *  The check-in comment editor on windows now defaults to NotePad.exe.
  *  Correctly deal with BOMs in check-in comments.  Also attempt to convert
     check-in comments to UTF8 from other encodings.
  *  Allow the deletion of multiple stash entries using multiple arguments
     to the "fossil stash rm" command.
  *  Enhance the "fossil server DIRECTORY" command to serve static content
     files contained in DIRECTORY.  For security, only files with a
     recognized suffix (such as *.html, *.jpg, *.txt, etc) will be delivered







     is used.
  *  Add javascript so that clicking on column headers in a ticket report
     sorts by the indicated column.
  *  Add the "fossil cat" command which is basically an alias for
     "fossil finfo -p".
  *  Hyperlinks with the class "button" are rendered as submenu buttons
     on embedded documentation.
  *  The check-in comment editor on Windows now defaults to NotePad.exe.
  *  Correctly deal with BOMs in check-in comments.  Also attempt to convert
     check-in comments to UTF8 from other encodings.
  *  Allow the deletion of multiple stash entries using multiple arguments
     to the "fossil stash rm" command.
  *  Enhance the "fossil server DIRECTORY" command to serve static content
     files contained in DIRECTORY.  For security, only files with a
     recognized suffix (such as *.html, *.jpg, *.txt, etc) will be delivered
  *  Add options to "fossil commit" to override the various sanity checks.
     Options added: --allow-empty, --allow-fork, --allow-older, and
     --allow-conflict.
  *  Optionally require a CAPTCHA (controlled by a setting on the
     Admin/Access webpage) when a user who is not logged in tries to
     edit wiki, or a ticket, or an attachment.
  *  Improvements to the "ssh://" sync protocol, to help it move past
     noisey motd comments.
  *  Add the uf=FILE-SHA1-HASH query parameter to the timeline, causing the
     timeline to show only check-ins that contain the specific file identified
     by FILE-SHA1-HASH.  ("uf" stands for "uses file".)
  *  Enhance the file change annotator so that it follows the file across
     name changes.
  *  Fix the server-side of the sync protocol so that it will not generate
     a delta loop when a file changes from its original state, through two
     or more intermediate states, and back to the original state, all within
     a single sync.
  *  Show much less output during a sync operation, unless the --verbose
     option is used.
  *  Set the action= attribute of &lt;form&gt; elements using javascript,
     as an addition defense against spam-bots.
  *  Disallow invalid UTF8 characters (such as characters in the surrogate
     pair range) in filenames.
  *  Judge the UserAgent strings issued by the NetSurf webbrowser to be
     coming from a human, not from a bot.
  *  Add the zlib sources to the Fossil source tree (under compat/zlib) and
     use those sources when compiling on (windows) systems that do not have
     a zlib library installed by default.
  *  Prompt the user with the option to convert non-UTF8 files into UTF8
     when committing.
  *  Allow the characters <nowiki>*[]?</nowiki> in filenames.
  *  Allow the --context option on diff commands to have a value of 0.
  *  Added the "dbstat" command.
  *  Enhanced "fossil merge" so that if the VERSION argument is omitted, Fossil
     tries to merge any forks of the current branch.
  *  Improved detection of forks in a commit race.
  *  Added the --analyze option to "fossil rebuild".

<h2>Changes For Version 1.24 (2012-10-22)</h2>
  *  Added support for WYSIWYG editing of wiki pages. WYSIWYG is turned off
     by default and can be turned on by setting a configuration option.
  *  Allow style= attribute to occur in HTML markup on wiki pages.
  *  Added the --tk option to the "fossi diff" and "fossil stash diff"
     commands, causing color-coded diff output to be displayed in a Tcl/Tk
     GUI window.  This option only works if Tcl/Tk is installed on the
     host.
  *  On windows, make the "gdiff" command default to use WinDiff.exe.
  *  Update the "fossil stash" command so that it always prompts for a
     comment if the -m option is omitted.
  *  Enhance the timeline webpages so that a=, b=, c=, d=, p=, and dp=
     query parameters (and others) can all accept any valid checkin name
     (such as branch names or labels) instead of just SHA1 hashes.
  *  Added the "fossil stash show" command.
  *  Added the "fileage" webpage with links to this page from the check-in
     information page and from the file browser.
  *  Added --age and -t options to the "fossil ls" command.
  *  Added the --setmtime option to "fossil update".  When used, the mtime
     of all mananged files is set to the time when the most recent version of
     the file was checked in.
  *  Changed the "vdiff" webpage to show the complete text of files that
     were added or removed (the equivelent of using the -N or --newfile
     options with the "fossil diff" command-line.)
  *  Added the --temp option to "fossil clean" and "fossil extra", causing
     those commands to only look at temporary files generated by Fossil,
     such as merge-conflict reports or aborted check-in messages.
  *  Enhance the raw page download so that it can guess the mimetype of
     attachments based on the filename.
  *  Change the behavior of the from= and to= query parameters on the







  *  Add options to "fossil commit" to override the various sanity checks.
     Options added: --allow-empty, --allow-fork, --allow-older, and
     --allow-conflict.
  *  Optionally require a CAPTCHA (controlled by a setting on the
     Admin/Access webpage) when a user who is not logged in tries to
     edit wiki, or a ticket, or an attachment.
  *  Improvements to the "ssh://" sync protocol, to help it move past
     noisy motd comments.
  *  Add the uf=FILE-SHA1-HASH query parameter to the timeline, causing the
     timeline to show only check-ins that contain the specific file identified
     by FILE-SHA1-HASH.  ("uf" stands for "uses file".)
  *  Enhance the file change annotator so that it follows the file across
     name changes.
  *  Fix the server-side of the sync protocol so that it will not generate
     a delta loop when a file changes from its original state, through two
     or more intermediate states, and back to the original state, all within
     a single sync.
  *  Show much less output during a sync operation, unless the --verbose
     option is used.
  *  Set the action= attribute of &lt;form&gt; elements using javascript,
     as an additional defense against spam-bots.
  *  Disallow invalid UTF8 characters (such as characters in the surrogate
     pair range) in filenames.
  *  Judge the UserAgent strings issued by the NetSurf web browser to be
     coming from a human, not from a bot.
  *  Add the zlib sources to the Fossil source tree (under compat/zlib) and
     use those sources when compiling on (Windows) systems that do not have
     a zlib library installed by default.
  *  Prompt the user with the option to convert non-UTF8 files into UTF8
     when committing.
  *  Allow the characters <nowiki>*[]?</nowiki> in filenames.
  *  Allow the --context option on diff commands to have a value of 0.
  *  Added the "dbstat" command.
  *  Enhanced "fossil merge" so that if the VERSION argument is omitted, Fossil
     tries to merge any forks of the current branch.
  *  Improved detection of forks in a commit race.
  *  Added the --analyze option to "fossil rebuild".

<h2>Changes For Version 1.24 (2012-10-22)</h2>
  *  Added support for WYSIWYG editing of wiki pages. WYSIWYG is turned off
     by default and can be turned on by setting a configuration option.
  *  Allow style= attribute to occur in HTML markup on wiki pages.
  *  Added the --tk option to the "fossil diff" and "fossil stash diff"
     commands, causing color-coded diff output to be displayed in a Tcl/Tk
     GUI window.  This option only works if Tcl/Tk is installed on the
     host.
  *  On Windows, make the "gdiff" command default to use WinDiff.exe.
  *  Update the "fossil stash" command so that it always prompts for a
     comment if the -m option is omitted.
  *  Enhance the timeline webpages so that a=, b=, c=, d=, p=, and dp=
     query parameters (and others) can all accept any valid check-in name
     (such as branch names or labels) instead of just SHA1 hashes.
  *  Added the "fossil stash show" command.
  *  Added the "fileage" webpage with links to this page from the check-in
     information page and from the file browser.
  *  Added --age and -t options to the "fossil ls" command.
  *  Added the --setmtime option to "fossil update".  When used, the mtime
     of all managed files is set to the time when the most recent version of
     the file was checked in.
  *  Changed the "vdiff" webpage to show the complete text of files that
     were added or removed (the equivalent of using the -N or --newfile
     options with the "fossil diff" command-line.)
  *  Added the --temp option to "fossil clean" and "fossil extra", causing
     those commands to only look at temporary files generated by Fossil,
     such as merge-conflict reports or aborted check-in messages.
  *  Enhance the raw page download so that it can guess the mimetype of
     attachments based on the filename.
  *  Change the behavior of the from= and to= query parameters on the
     to use Tcl as part of their configuration.  This reduces the size of
     the Fossil binary and allows any version of Tcl 8.4 or later to be used.
  *  Merge the latest SQLite changes from upstream.
  *  Lots of minor bug fixes.

<h2>Changes For Version 1.23 (2012-08-08)</h2>
  *  The default checkout database name is now ".fslckout" instead of
     "_FOSSIL_" on unix.  Both names continue to work.
  *  Added the "fossil all changes" command
  *  Added the --ckout option to the "fossil all list" command
  *  Added the "public-pages" glob pattern that can be configured to allow
     anonymous users to see embedded documentation on sites where source
     code should not be accessible to anonymous users.
  *  Allow multiple --tag options on the same "fossil commit" command.
  *  Change the meaning of the --bgcolor option for "fossil commit" to only







     to use Tcl as part of their configuration.  This reduces the size of
     the Fossil binary and allows any version of Tcl 8.4 or later to be used.
  *  Merge the latest SQLite changes from upstream.
  *  Lots of minor bug fixes.

<h2>Changes For Version 1.23 (2012-08-08)</h2>
  *  The default checkout database name is now ".fslckout" instead of
     "_FOSSIL_" on Unix.  Both names continue to work.
  *  Added the "fossil all changes" command
  *  Added the --ckout option to the "fossil all list" command
  *  Added the "public-pages" glob pattern that can be configured to allow
     anonymous users to see embedded documentation on sites where source
     code should not be accessible to anonymous users.
  *  Allow multiple --tag options on the same "fossil commit" command.
  *  Change the meaning of the --bgcolor option for "fossil commit" to only
  *  New command: ticket history. [98a855c508]
  *  Disabled SSLv2 in HTTPS client.[ea1d369d23]
  *  Fixed constant prompting regarding previously-saved SSL
     certificates. [636804745b]
  *  Other SSL improvements.
  *  Added -R REPOFILE support to several more CLI commands. [e080560378]
  *  Generated tarballs now have constant timestamps, so they are
     always identical for any given checkin. [e080560378]
  *  A number of minor HTML-related tweaks and fixes.
  *  Added --args FILENAME global CLI argument to import arbitrary
     CLI arguments from a file (e.g. long file lists). [e080560378]
  *  Fixed significant memory leak in annotation of files with long
     histories.[9929bab702]
  *  Added warnings when a merge operation overwrites local copies
     (UNDO is available, but previously this condition normally went







  *  New command: ticket history. [98a855c508]
  *  Disabled SSLv2 in HTTPS client.[ea1d369d23]
  *  Fixed constant prompting regarding previously-saved SSL
     certificates. [636804745b]
  *  Other SSL improvements.
  *  Added -R REPOFILE support to several more CLI commands. [e080560378]
  *  Generated tarballs now have constant timestamps, so they are
     always identical for any given check-in. [e080560378]
  *  A number of minor HTML-related tweaks and fixes.
  *  Added --args FILENAME global CLI argument to import arbitrary
     CLI arguments from a file (e.g. long file lists). [e080560378]
  *  Fixed significant memory leak in annotation of files with long
     histories.[9929bab702]
  *  Added warnings when a merge operation overwrites local copies
     (UNDO is available, but previously this condition normally went
  *  Now works on MSVC with repos >2GB. [6092935ff2]
  *  A number of code cleanups to resolve warnings from various compilers.
  *  Update the built-in SQLite to version 3.7.9 beta.

<h2>Changes For Version 1.19 (2011-09-02)</h2>
  *  Added a ./configure script based on autosetup.
  *  Added the "[/help/winsrv | fossil winsrv]" command
     for creating a Fossil service on windows systems.
  *  Added "versionable settings" where settings that affect
     the local tree can be stored in versioned files in the
     .fossil-settings directory.
  *  Background colors for branches are choosen automatically if no
     color is specified by the user.
  *  The status, changes and extras commands now show
     pathnames relative to the current working directory,
     unless overridden by command line options or the
     "relative-paths" setting.<br><b>WARNING:</b> This
     change will break scripts which rely on the current
     output when the current working directory is not the
     repository root.
  *  Added "empty-dirs" versionable setting.
  *  Added support for client-side SSL certificates with "ssl-identity"
     setting and --ssl-identity option.
  *  Added "ssl-ca-location" setting to specify trusted root
     SSL certificates.
  *  Added the --case-sensitive BOOLEAN command-line option to many commands.
     Default to true for unix and false for windows.
  *  Added the "Color-Test" submenu button on the branch list web page.
  *  Compatibility improvements to the git-export feature.
  *  Performance improvements on SHA1 checksums
  *  Update to the latest SQLite version 3.7.8 alpha.
  *  Fix the tarball generator to work with very log pathnames

<h2>Changes For Version 1.18 (2011-07-14)</h2>







  *  Now works on MSVC with repos >2GB. [6092935ff2]
  *  A number of code cleanups to resolve warnings from various compilers.
  *  Update the built-in SQLite to version 3.7.9 beta.

<h2>Changes For Version 1.19 (2011-09-02)</h2>
  *  Added a ./configure script based on autosetup.
  *  Added the "[/help/winsrv | fossil winsrv]" command
     for creating a Fossil service on Windows systems.
  *  Added "versionable settings" where settings that affect
     the local tree can be stored in versioned files in the
     .fossil-settings directory.
  *  Background colors for branches are chosen automatically if no
     color is specified by the user.
  *  The status, changes and extras commands now show
     pathnames relative to the current working directory,
     unless overridden by command line options or the
     "relative-paths" setting.<br><b>WARNING:</b> This
     change will break scripts which rely on the current
     output when the current working directory is not the
     repository root.
  *  Added "empty-dirs" versionable setting.
  *  Added support for client-side SSL certificates with "ssl-identity"
     setting and --ssl-identity option.
  *  Added "ssl-ca-location" setting to specify trusted root
     SSL certificates.
  *  Added the --case-sensitive BOOLEAN command-line option to many commands.
     Default to true for Unix and false for Windows.
  *  Added the "Color-Test" submenu button on the branch list web page.
  *  Compatibility improvements to the git-export feature.
  *  Performance improvements on SHA1 checksums
  *  Update to the latest SQLite version 3.7.8 alpha.
  *  Fix the tarball generator to work with very long pathnames

<h2>Changes For Version 1.18 (2011-07-14)</h2>

Changes to www/checkin_names.wiki.

determines which version of the documentation to display.

Fossil provides a variety of ways to specify a check-in.  This
document describes the various methods.

<h2>Canonical Check-in Name</h2>

The canonical name of a checkin is the SHA1 hash of its
[./fileformat.wiki#manifest | manifest] expressed as a 40-character
lowercase hexadecimal number.  For example:

<blockquote><pre>
fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9
</pre></blockquote>








determines which version of the documentation to display.

Fossil provides a variety of ways to specify a check-in.  This
document describes the various methods.

<h2>Canonical Check-in Name</h2>

The canonical name of a check-in is the SHA1 hash of its
[./fileformat.wiki#manifest | manifest] expressed as a 40-character
lowercase hexadecimal number.  For example:

<blockquote><pre>
fossil info e5a734a19a9826973e1d073b49dc2a16aa2308f9
</pre></blockquote>
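
An unambiguous prefix of the canonical name is generally accepted wherever a
full check-in name may appear, so the example above can usually be shortened
to something like:

<blockquote><pre>
fossil info e5a734a19a98
</pre></blockquote>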


The "tag:deed2" name will refer to the most recent check-in 
tagged with "deed2" not to the
check-in whose canonical name begins with "deed2".

<h2>Whole Branches</h2>

Usually whan a branch name is specified, it means the latest checkin on
that branch.  But for some commands (ex: [/help/purge|purge]) a branch name
on the argument means the earliest connected checkin on the branch.  This
seems confusing when being explained here, but it works out to be intuitive
in practice.

For example, the command "fossil purge XYZ" means to purge the checkin XYZ
and all of its descendents.  But when XYZ is in the form of a branch name, one
generally wants to purge the entire branch, not just the last checkin on the
branch.  And so for this reason, commands like purge will interpret a branch
name to be the first checkin of the branch rather than the last.  If there
are two or more branches with the same name, then these commands will select
the first check-in of the branch that has the most recent checkin.  What
happens is that Fossil searches for the most recent checkin with the given
tag, just as it always does.  But if that tag is a branch name, it then walks
back down the branch looking for the first check-in of that branch.

Again, this behavior only occurs on a few commands where it make sense.

<h2>Timestamps</h2>









The "tag:deed2" name will refer to the most recent check-in 
tagged with "deed2", not to the
check-in whose canonical name begins with "deed2".

<h2>Whole Branches</h2>

Usually when a branch name is specified, it means the latest check-in on
that branch.  But for some commands (ex: [/help/purge|purge]) a branch name
given as the argument means the earliest connected check-in on the branch.  This
seems confusing when being explained here, but it works out to be intuitive
in practice.

For example, the command "fossil purge XYZ" means to purge the check-in XYZ
and all of its descendants.  But when XYZ is in the form of a branch name, one
generally wants to purge the entire branch, not just the last check-in on the
branch.  And so for this reason, commands like purge will interpret a branch
name to be the first check-in of the branch rather than the last.  If there
are two or more branches with the same name, then these commands will select
the first check-in of the branch that has the most recent check-in.  What
happens is that Fossil searches for the most recent check-in with the given
tag, just as it always does.  But if that tag is a branch name, it then walks
back down the branch looking for the first check-in of that branch.

Again, this behavior only occurs on a few commands where it makes sense.

<h2>Timestamps</h2>


In its default configuration, Fossil interprets and displays all dates
in Universal Coordinated Time (UTC).  This tends to work the best for
distributed projects where participants are scattered around the globe.
But there is an option on the Admin/Timeline page of the web-interface to
switch to local time.  The "<b>Z</b>" suffix on an timestamp check-in
name is meaningless if Fossil is in the default mode of using UTC for
everything, but if Fossil has been switched to localtime mode, then the
"<b>Z</b>" suffix means to interpret that particular timestamp using 
UTC instead localtime.

For an example of how timestamps are useful, 
consider the homepage for the Fossil website itself:

<blockquote>
http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/index.wiki
</blockquote>








In its default configuration, Fossil interprets and displays all dates
in Coordinated Universal Time (UTC).  This tends to work best for
distributed projects where participants are scattered around the globe.
But there is an option on the Admin/Timeline page of the web-interface to
switch to local time.  The "<b>Z</b>" suffix on a timestamp check-in
name is meaningless if Fossil is in the default mode of using UTC for
everything, but if Fossil has been switched to local time mode, then the
"<b>Z</b>" suffix means to interpret that particular timestamp using 
UTC instead of local time.
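
As a purely illustrative example (the date below is arbitrary), such a
timestamp can be used anywhere a check-in name is expected:

<blockquote><pre>
fossil update 2015-02-25
</pre></blockquote>

Roughly speaking, this names the most recent check-in that is not newer than
the given date, and the "<b>Z</b>" suffix described above may be appended to a
full timestamp to force UTC interpretation.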

For an example of how timestamps are useful, 
consider the homepage for the Fossil website itself:

<blockquote>
http://www.fossil-scm.org/fossil/doc/<b>trunk</b>/www/index.wiki
</blockquote>

Changes to www/concepts.wiki.

    every artifact in a check-in.</li>
<li>The artifact ID of the manifest is the identifier of the check-in.</li>
</ul>

<h2>3.0 Fossil - The Program</h2>

Fossil is software.  The implementation of fossil is in the form
of a single executable named "fossil" (or "fossil.exe" on windows).
To install fossil on your system,
all you have to do is obtain a copy of this one executable file (either
by downloading a
<a href="http://www.fossil-scm.org/download.html">pre-compiled version</a>
or [./build.wiki | compiling it yourself]) and then
putting that file somewhere on your PATH.








    every artifact in a check-in.</li>
<li>The artifact ID of the manifest is the identifier of the check-in.</li>
</ul>

<h2>3.0 Fossil - The Program</h2>

Fossil is software.  The implementation of fossil is in the form
of a single executable named "fossil" (or "fossil.exe" on Windows).
To install fossil on your system,
all you have to do is obtain a copy of this one executable file (either
by downloading a
<a href="http://www.fossil-scm.org/download.html">pre-compiled version</a>
or [./build.wiki | compiling it yourself]) and then
putting that file somewhere on your PATH.

Changes to www/contribute.wiki.

the introduction of code with incompatible licenses or other entanglements
that might cause legal problems for Fossil users.  Many larger companies
and other lawyer-rich organizations require this as a precondition to using 
Fossil.

If you do not wish to submit a Contributor Agreement, we would still
welcome your suggestions and example code, but we will not use your code
directly - we will be forced to reimplement your changes from scratch which
might take longer.

<h2>2.0 Submitting Patches</h2>

Suggested changes or bug fixes can be submitted by creating a patch
against the current source tree.  Email patches to 
<a href="mailto:drh@sqlite.org">drh@sqlite.org</a>.  Be sure to 







the introduction of code with incompatible licenses or other entanglements
that might cause legal problems for Fossil users.  Many larger companies
and other lawyer-rich organizations require this as a precondition to using 
Fossil.

If you do not wish to submit a Contributor Agreement, we would still
welcome your suggestions and example code, but we will not use your code
directly - we will be forced to re-implement your changes from scratch which
might take longer.

<h2>2.0 Submitting Patches</h2>

Suggested changes or bug fixes can be submitted by creating a patch
against the current source tree.  Email patches to 
<a href="mailto:drh@sqlite.org">drh@sqlite.org</a>.  Be sure to 
A contributor agreement is, of course, a prerequisite for check-in
privileges.</p>

Contributors are asked to make all non-trivial changes on a branch.  The
Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p>

Contributors are required to following the
[./checkin.wiki | pre-checkin checklist] prior to every checkin to
the Fossil self-hosting repository.  This checklist is short and succinct
and should only require a few seconds to follow.  Contributors 
should print out a copy of the pre-checkin checklist and keep
it on a notecard beside their workstations, for quick reference.

Contributors should review the
[./style.wiki | Coding Style Guidelines] and mimic the coding style







A contributor agreement is, of course, a prerequisite for check-in
privileges.</p>

Contributors are asked to make all non-trivial changes on a branch.  The
Fossil Architect (Richard Hipp) will merge changes onto the trunk.</p>

Contributors are required to follow the
[./checkin.wiki | pre-checkin checklist] prior to every check-in to
the Fossil self-hosting repository.  This checklist is short and succinct
and should only require a few seconds to follow.  Contributors 
should print out a copy of the pre-checkin checklist and keep
it on a notecard beside their workstations, for quick reference.

Contributors should review the
[./style.wiki | Coding Style Guidelines] and mimic the coding style

Changes to www/copyright-release.html.

<h1 align="center">
Fossil SCM Contributor Agreement
</h1>

<p>
This agreement applies to your contribution of material to the
Fossil Software Configuration Management System ("Fossil") that is
mananged by Hipp, Wyrick &amp; Company, Inc. ("Hwaci") and
sets out the intellectual property rights you grant to Hwaci in the
contributed material.
The terms "contribution" and "contributed material" mean any source code, 
object code, patch, tool, sample, graphic, specification, manual, 
documentation, or any other material posted, submitted, or uploaded by 
you to the Fossil project.
 The term "you" means the person identified







<h1 align="center">
Fossil SCM Contributor Agreement
</h1>

<p>
This agreement applies to your contribution of material to the
Fossil Software Configuration Management System ("Fossil") that is
managed by Hipp, Wyrick &amp; Company, Inc. ("Hwaci") and
sets out the intellectual property rights you grant to Hwaci in the
contributed material.
The terms "contribution" and "contributed material" mean any source code, 
object code, patch, tool, sample, graphic, specification, manual, 
documentation, or any other material posted, submitted, or uploaded by 
you to the Fossil project.
 The term "you" means the person identified
<ul>
<li> Your contribution is an original work and that you can legally
     grant the rights set out in this agreement.
<li> Your contribution does not, to the best of your knowledge and belief,
     violate any third party's copyrights, trademarks, patents,
     or other intellectual property rights.
<li> You are authorized to sign this agreement on behalf of your
     company (if appliable).
</ul>
</ol>

<p>By filling in the following information and signing your name,
you agree to be bound by all of the terms
set forth in this agreement.  Please print clearly.</p>
 







<ul>
<li> Your contribution is an original work and that you can legally
     grant the rights set out in this agreement.
<li> Your contribution does not, to the best of your knowledge and belief,
     violate any third party's copyrights, trademarks, patents,
     or other intellectual property rights.
<li> You are authorized to sign this agreement on behalf of your
     company (if applicable).
</ul>
</ol>

<p>By filling in the following information and signing your name,
you agree to be bound by all of the terms
set forth in this agreement.  Please print clearly.</p>
 

Changes to www/delta_format.wiki.

<tr><td>&nbsp;</td> <td>1B@Jd,	         </td><td>Copy    </td><td> 75 @ 1256	     </td></tr>
<tr><td>&nbsp;</td> <td>6:scenda         </td><td>Literal </td><td> 6 'scenda'	     </td></tr>
<tr><td>&nbsp;</td> <td>5x@Kt,	         </td><td>Copy    </td><td> 380 @ 1336	     </td></tr>
<tr><td>&nbsp;</td> <td>6:pieces	 </td><td>Literal </td><td> 6 'pieces'	     </td></tr>
<tr><td>&nbsp;</td> <td>79@Qt,	         </td><td>Copy    </td><td> 457 @ 1720     </td></tr>
<tr><td>&nbsp;</td> <td>F: Example: eskil</td><td>Literal </td><td> 15 ' Example: eskil'</td></tr>
<tr><td>&nbsp;</td> <td>~E@Y0,           </td><td>Copy    </td><td>  4046 @ 2176        </td></tr>
<tr><td>Trailer</td><td>2zMM3E           </td><td>Ckecksum</td><td> -1101438770         </td></tr>
</table>

<p>The unified diff behind the above delta is</p>

<table border=1><tr><td><pre>
bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new 
--- ../DELTA/old        2007-08-23 21:14:40.000000000 -0700







<tr><td>&nbsp;</td> <td>1B@Jd,	         </td><td>Copy    </td><td> 75 @ 1256	     </td></tr>
<tr><td>&nbsp;</td> <td>6:scenda         </td><td>Literal </td><td> 6 'scenda'	     </td></tr>
<tr><td>&nbsp;</td> <td>5x@Kt,	         </td><td>Copy    </td><td> 380 @ 1336	     </td></tr>
<tr><td>&nbsp;</td> <td>6:pieces	 </td><td>Literal </td><td> 6 'pieces'	     </td></tr>
<tr><td>&nbsp;</td> <td>79@Qt,	         </td><td>Copy    </td><td> 457 @ 1720     </td></tr>
<tr><td>&nbsp;</td> <td>F: Example: eskil</td><td>Literal </td><td> 15 ' Example: eskil'</td></tr>
<tr><td>&nbsp;</td> <td>~E@Y0,           </td><td>Copy    </td><td>  4046 @ 2176        </td></tr>
<tr><td>Trailer</td><td>2zMM3E           </td><td>Checksum</td><td> -1101438770         </td></tr>
</table>

<p>The unified diff behind the above delta is</p>

<table border=1><tr><td><pre>
bluepeak:(761) ~/Projects/Tcl/Fossil/Devel/devel > diff -u ../DELTA/old ../DELTA/new 
--- ../DELTA/old        2007-08-23 21:14:40.000000000 -0700

Changes to www/faq.tcl.

  </blockquote>

  The CHECK-IN in the previous line can be any
  [./checkin_names.wiki | valid check-in name format].

  You can also add (and remove) tags from a check-in using the
  [./webui.wiki | web interface].  First locate the check-in that you
  what to tag on the tmline, then click on the link to go the detailed
  information page for that check-in.  Then find the "<b>edit</b>"
  link (near the "Commands:" label) and click on that.  There are
  controls on the edit page that allow new tags to be added and existing
  tags to be removed.
}

faq {
  How do I create a private branch that won't get pushed back to the
  main repository.
} {
  Use the <b>--private</b> command-line option on the
  <b>commit</b> command.  The result will be a check-in which exists on
  your local repository only and is never pushed to other repositories.
  All descendents of a private check-in are also private.

  Unless you specify something different using the <b>--branch</b> and/or
  <b>--bgcolor</b> options, the new private check-in will be put on a branch
  named "private" with an orange background color.

  You can merge from the trunk into your private branch in order to keep
  your private branch in sync with the latest changes on the trunk.  Once







  </blockquote>

  The CHECK-IN in the previous line can be any
  [./checkin_names.wiki | valid check-in name format].

  You can also add (and remove) tags from a check-in using the
  [./webui.wiki | web interface].  First locate the check-in that you
  want to tag on the timeline, then click on the link to go to the detailed
  information page for that check-in.  Then find the "<b>edit</b>"
  link (near the "Commands:" label) and click on that.  There are
  controls on the edit page that allow new tags to be added and existing
  tags to be removed.
}

faq {
  How do I create a private branch that won't get pushed back to the
  main repository.
} {
  Use the <b>--private</b> command-line option on the
  <b>commit</b> command.  The result will be a check-in which exists on
  your local repository only and is never pushed to other repositories.
  All descendants of a private check-in are also private.

  Unless you specify something different using the <b>--branch</b> and/or
  <b>--bgcolor</b> options, the new private check-in will be put on a branch
  named "private" with an orange background color.
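
  As a minimal sketch (the commit message here is just an example):

  <blockquote><pre>
  fossil commit --private -m "experimental work"
  </pre></blockquote>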

  You can merge from the trunk into your private branch in order to keep
  your private branch in sync with the latest changes on the trunk.  Once

Changes to www/faq.wiki.

</blockquote>

The CHECK-IN in the previous line can be any
[./checkin_names.wiki | valid check-in name format].

You can also add (and remove) tags from a check-in using the
[./webui.wiki | web interface].  First locate the check-in that you 
what to tag on the tmline, then click on the link to go the detailed
information page for that check-in.  Then find the "<b>edit</b>"
link (near the "Commands:" label) and click on that.  There are
controls on the edit page that allow new tags to be added and existing
tags to be removed.</blockquote></li>

<a name="q5"></a>
<p><b>(5) How do I create a private branch that won't get pushed back to the
  main repository.</b></p>

<blockquote>Use the <b>--private</b> command-line option on the 
<b>commit</b> command.  The result will be a check-in which exists on
your local repository only and is never pushed to other repositories.  
All descendents of a private check-in are also private.

Unless you specify something different using the <b>--branch</b> and/or
<b>--bgcolor</b> options, the new private check-in will be put on a branch
named "private" with an orange background color.

You can merge from the trunk into your private branch in order to keep
your private branch in sync with the latest changes on the trunk.  Once







</blockquote>

The CHECK-IN in the previous line can be any
[./checkin_names.wiki | valid check-in name format].

You can also add (and remove) tags from a check-in using the
[./webui.wiki | web interface].  First locate the check-in that you
want to tag on the timeline, then click on the link to go to the detailed
information page for that check-in.  Then find the "<b>edit</b>"
link (near the "Commands:" label) and click on that.  There are
controls on the edit page that allow new tags to be added and existing
tags to be removed.</blockquote></li>

<a name="q5"></a>
<p><b>(5) How do I create a private branch that won't get pushed back to the
  main repository.</b></p>

<blockquote>Use the <b>--private</b> command-line option on the 
<b>commit</b> command.  The result will be a check-in which exists on
your local repository only and is never pushed to other repositories.  
All descendants of a private check-in are also private.

Unless you specify something different using the <b>--branch</b> and/or
<b>--bgcolor</b> options, the new private check-in will be put on a branch
named "private" with an orange background color.

You can merge from the trunk into your private branch in order to keep
your private branch in sync with the latest changes on the trunk.  Once

Changes to www/fileformat.wiki.

</blockquote>

A manifest may optionally have a single B-card.  The B-card specifies
another manifest that serves as the "baseline" for this manifest.  A
manifest that has a B-card is called a delta-manifest and a manifest
that omits the B-card is a baseline-manifest.  The other manifest
identified by the argument of the B-card must be a baseline-manifest.
A baseline-manifest records the complete contents of a checkin.
A delta-manifest records only changes from its baseline.  

A manifest must have exactly one C-card.  The sole argument to
the C-card is a check-in comment that describes the check-in that
the manifest defines.  The check-in comment is text.  The following
escape sequences are applied to the text:
A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73).  A







</blockquote>

A manifest may optionally have a single B-card.  The B-card specifies
another manifest that serves as the "baseline" for this manifest.  A
manifest that has a B-card is called a delta-manifest and a manifest
that omits the B-card is a baseline-manifest.  The other manifest
identified by the argument of the B-card must be a baseline-manifest.
A baseline-manifest records the complete contents of a check-in.
A delta-manifest records only changes from its baseline.  

A manifest must have exactly one C-card.  The sole argument to
the C-card is a check-in comment that describes the check-in that
the manifest defines.  The check-in comment is text.  The following
escape sequences are applied to the text:
A space (ASCII 0x20) is represented as "\s" (ASCII 0x5C, 0x73).  A
some other artifact.  The T card has two or three values.  The
second argument is the 40 character lowercase artifact ID of the artifact
to which the tag is to be applied. The
first value is the tag name.  The first character of the tag
is either "+", "-", or "*".  The "+" means the tag should be added
to the artifact.  The "-" means the tag should be removed.
The "*" character means the tag should be added to the artifact
and all direct descendants (but not descendents through a merge) down
to but not including the first descendant that contains a 
more recent "-", "*", or "+" tag with the same name.
The optional third argument is the value of the tag.  A tag
without a value is a boolean.

When two or more tags with the same name are applied to the
same artifact, the tag with the latest (most recent) date is
used.

Some tags have special meaning.  The "comment" tag when applied
to a check-in will override the check-in comment of that check-in







some other artifact.  The T card has two or three values.  The
second argument is the 40 character lowercase artifact ID of the artifact
to which the tag is to be applied. The
first value is the tag name.  The first character of the tag
is either "+", "-", or "*".  The "+" means the tag should be added
to the artifact.  The "-" means the tag should be removed.
The "*" character means the tag should be added to the artifact
and all direct descendants (but not descendants through a merge) down
to but not including the first descendant that contains a 
more recent "-", "*", or "+" tag with the same name.
The optional third argument is the value of the tag.  A tag
without a value is a Boolean.
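
For illustration only (the artifact ID below is a placeholder, not a real
hash), a T card that adds a "bgcolor" tag with a value to one particular
artifact could look like:

<blockquote><pre>
T +bgcolor 0123456789012345678901234567890123456789 orange
</pre></blockquote>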

When two or more tags with the same name are applied to the
same artifact, the tag with the latest (most recent) date is
used.

Some tags have special meaning.  The "comment" tag when applied
to a check-in will override the check-in comment of that check-in

Changes to www/fiveminutes.wiki.

<title>Up and running in 5 minutes as a single user</title>

<p align="center"><b><i>
The following document was contributed by Gilles Ganault on 2013-01-08.
</i></b>
</p><hr>

<h1>Up and running in 5 minutes as a single user</h1>
<p>This short document explains the main basic Fossil commands for a single 
user, ie. with no additional users, with no need to synchronize with some remote 
repository, and no need for branching/forking.</p>

<h2>Create a new repository</h2>
<p>fossil new c:\test.repo</p>
<p>This will create the new SQLite binary file that holds the repository, ie. 
files, tickets, wiki, etc. It can be located anywhere, although it's considered 
best practise to keep it outside the work directory where you will work on files 
after they've been checked out of the repository.</p>

<h2>Open the repository</h2>
<p>cd c:\temp\test.fossil</p>
<p>fossil open c:\test.repo</p>
<p>This will check out the last revision of all the files in the repository, 
if any, into the current work directory. In addition, it will create a binary 
file _FOSSIL_ to keep track of changes (on non-Windows systems it is called
<tt>.fslckout</tt>).</p>

<h2>Add new files</h2>
<p>fossil add .</p>
<p>To tell Fossil to add new files to the repository. The files aren't actually 
added until you run &quot;commit&quot;. When using &quot;.&quot;, it tells Fossil 
to add all the files in the current directory recursively, ie. including all 
the files in all the subdirectories.</p>
<p>Note: To tell Fossil to ignore some extensions:</p>
<p>fossil settings ignore-glob &quot;*.o,*.obj,*.exe&quot; --global</p>

<h2>Remove files that haven't been commited yet</h2>
<p>fossil delete myfile.c</p>
<p>This will simply remove the item from the list of files that were previously 
added through &quot;fossil add&quot;.</p>

<h2>Check current status</h2>
<p>fossil changes</p>
<p>This shows the list of changes that have been done and will be commited the 
next time you run &quot;fossil commit&quot;. It's a useful command to run before 
running &quot;fossil commit&quot; just to check that things are OK before proceeding.</p>

<h2>Commit changes</h2>
<p>To actually apply the pending changes to the repository, eg. new files marked 
for addition, checked-out files that have been edited and must be checked-in, 
etc.</p>

<p>fossil commit -m "Added stuff"</p>

If no file names are provided on the command-line then all changes will be checked in,
otherwise just the listed file(s) will be checked in.

<h2>Compare two revisions of a file</h2>
<p>If you wish to compare the last revision of a file and its checked out version 
in your work directory:</p>
<p>fossil gdiff myfile.c</p>
<p>If you wish to compare two different revisions of a file in the repository:</p>
<p>fossil finfo myfile: Note the first hash, which is the UUID of the commit 
when the file was commited</p>
<p>fossil gdiff --from UUID#1 --to UUID#2 myfile.c</p>
<h2>Cancel changes and go back to previous revision</h2>
<p>fossil revert myfile.c</p>
<p>Fossil does not prompt when reverting a file. It simply reminds the user about the
"undo" command, just in case the revert was a mistake.</p>


<h2>Close the repository</h2>
<p>fossil close</p>
<p>This will simply remove the _FOSSIL_ at the root of the work directory but 
will not delete the files in the work directory. From then on, any use of &quot;fossil&quot; 
will trigger an error since there is no longer any connection.</p>









<title>Up and running in 5 minutes as a single user</title>

<p align="center"><b><i>
The following document was contributed by Gilles Ganault on 2013-01-08.
</i></b>
</p><hr>

<h1>Up and running in 5 minutes as a single user</h1>
<p>This short document explains the main basic Fossil commands for a single 
user, i.e. with no additional users, with no need to synchronize with some remote 
repository, and no need for branching/forking.</p>

<h2>Create a new repository</h2>
<p>fossil new c:\test.repo</p>
<p>This will create the new SQLite binary file that holds the repository, i.e. 
files, tickets, wiki, etc. It can be located anywhere, although it's considered 
best practice to keep it outside the work directory where you will work on files 
after they've been checked out of the repository.</p>

<h2>Open the repository</h2>
<p>cd c:\temp\test.fossil</p>
<p>fossil open c:\test.repo</p>
<p>This will check out the last revision of all the files in the repository, 
if any, into the current work directory. In addition, it will create a binary 
file _FOSSIL_ to keep track of changes (on non-Windows systems it is called
<tt>.fslckout</tt>).</p>

<h2>Add new files</h2>
<p>fossil add .</p>
<p>To tell Fossil to add new files to the repository. The files aren't actually 
added until you run &quot;commit&quot;. When using &quot;.&quot;, it tells Fossil 
to add all the files in the current directory recursively, i.e. including all 
the files in all the subdirectories.</p>
<p>Note: To tell Fossil to ignore some extensions:</p>
<p>fossil settings ignore-glob &quot;*.o,*.obj,*.exe&quot; --global</p>

<h2>Remove files that haven't been committed yet</h2>
<p>fossil delete myfile.c</p>
<p>This will simply remove the item from the list of files that were previously 
added through &quot;fossil add&quot;.</p>

<h2>Check current status</h2>
<p>fossil changes</p>
<p>This shows the list of changes that have been done and will be committed the 
next time you run &quot;fossil commit&quot;. It's a useful command to run before 
running &quot;fossil commit&quot; just to check that things are OK before proceeding.</p>

<h2>Commit changes</h2>
<p>To actually apply the pending changes to the repository, e.g. new files marked 
for addition, checked-out files that have been edited and must be checked-in, 
etc.</p>

<p>fossil commit -m "Added stuff"</p>

If no file names are provided on the command-line then all changes will be checked in,
otherwise just the listed file(s) will be checked in.

<h2>Compare two revisions of a file</h2>
<p>If you wish to compare the last revision of a file and its checked out version 
in your work directory:</p>
<p>fossil gdiff myfile.c</p>
<p>If you wish to compare two different revisions of a file in the repository:</p>
<p>fossil finfo myfile: Note the first hash, which is the UUID of the commit 
when the file was committed</p>
<p>fossil gdiff --from UUID#1 --to UUID#2 myfile.c</p>
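<p>For illustration, the two-step sequence might look as follows (the hashes
below are hypothetical placeholders; use the ones printed by &quot;fossil finfo&quot;):</p>
<p>fossil finfo myfile.c</p>
<p>fossil gdiff --from 9a8b7c6d --to 1f2e3d4c myfile.c</p>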
<h2>Cancel changes and go back to previous revision</h2>
<p>fossil revert myfile.c</p>
<p>Fossil does not prompt when reverting a file. It simply reminds the user about the
"undo" command, just in case the revert was a mistake.</p>


<h2>Close the repository</h2>
<p>fossil close</p>
<p>This will simply remove the _FOSSIL_ file at the root of the work directory but 
will not delete the files in the work directory. From then on, any use of &quot;fossil&quot; 
will trigger an error since there is no longer any connection.</p>

Changes to www/fossil-v-git.wiki.

uncommon for no repository in the project to hold all the different code
versions for a project.  Instead the information is distributed.
Individual developers have one or more private branches.  A hierarchy
of integrators merge changes from individual developers into collaborative
branches, until all the changes are merged together at the top-level master
branch.  And all of this can be accomplished without having to have all the
code in any one repository.  Developers or groups of developers can share
only those branches that they want to share and keep other branchs of the
project private.  This is analogous to sharding an a distributed database.

Fossil allows private branches, but its default mode is to share everything.
And so in a Fossil project, all repositories tend to contain all of the
content at all times.  This is analogous to replication in a
distributed database.

The Git model works best for large projects, like the
Linux kernel for which Git was designed.
Linus Torvalds does not need or want to see a thousand
different branches, one for each contributor.  Git allows intermediary
"gate-keepers" to merge changes from multiple lower-level developers
into a single branch and only present Linus with a handful of branches
at a time.  Git encourages a programming model where each developer
works in his or her own branch and then merges changes up the hierarchy
until they reach the master branch.

Fossil is designed for smaller and non-hierarchical teams where all
developers are operating directly on the master branch, or at most
a small number of well defined branches.
The [./concepts.wiki#workflow | autosync] mode of Fossil makes it easy
for multiple developers to work on a single branch and maintain
linear development on that branch and avoid needless forking
and merging.

<h3>3.3 Branches</h3>








uncommon for no repository in the project to hold all the different code
versions for a project.  Instead the information is distributed.
Individual developers have one or more private branches.  A hierarchy
of integrators merge changes from individual developers into collaborative
branches, until all the changes are merged together at the top-level master
branch.  And all of this can be accomplished without having to have all the
code in any one repository.  Developers or groups of developers can share
only those branches that they want to share and keep other branches of the
project private.  This is analogous to sharding a distributed database.

Fossil allows private branches, but its default mode is to share everything.
And so in a Fossil project, all repositories tend to contain all of the
content at all times.  This is analogous to replication in a
distributed database.

The Git model works best for large projects, like the
Linux kernel for which Git was designed.
Linus Torvalds does not need or want to see a thousand
different branches, one for each contributor.  Git allows intermediary
"gate-keepers" to merge changes from multiple lower-level developers
into a single branch and only present Linus with a handful of branches
at a time.  Git encourages a programming model where each developer
works in his or her own branch and then merges changes up the hierarchy
until they reach the master branch.

Fossil is designed for smaller and non-hierarchical teams where all
developers are operating directly on the master branch, or at most
a small number of well-defined branches.
The [./concepts.wiki#workflow | autosync] mode of Fossil makes it easy
for multiple developers to work on a single branch and maintain
linear development on that branch and avoid needless forking
and merging.

<h3>3.3 Branches</h3>

and the changes stay integrated naturally.

So to a first approximation, branches in Git are developer-centric whereas
branches in Fossil are feature-centric.

The Git approach scales much better for large projects like the Linux
kernel with thousands of contributors who in many cases don't even know
each others names.  The integrators serve a gatekeeper role to help keep
undesirable code out of the official Linux source tree.  On the other hand,
not many projects are as big or as loosely organized as the Linux kernel.
Most projects have a small team of developers who all know each other
well and trust each other, and who enjoy working together collaboratively
without the overhead and hierarchy of integrators.

One consequence of the "everybody-sees-everything" focus of Fossil is that







and the changes stay integrated naturally.

So to a first approximation, branches in Git are developer-centric whereas
branches in Fossil are feature-centric.

The Git approach scales much better for large projects like the Linux
kernel with thousands of contributors who in many cases don't even know
each other's names.  The integrators serve a gatekeeper role to help keep
undesirable code out of the official Linux source tree.  On the other hand,
not many projects are as big or as loosely organized as the Linux kernel.
Most projects have a small team of developers who all know each other
well and trust each other, and who enjoy working together collaboratively
without the overhead and hierarchy of integrators.

One consequence of the "everybody-sees-everything" focus of Fossil is that
Git features the "rebase" command which can be used to change the
sequence of check-ins in the repository.  Rebase can be used to "clean up"
a complex sequence of check-ins to make their intent easier for others
to understand.  This is important if you view the history of a project
as part of the documentation for the project.

Fossil takes an opposing view.  Fossil views history as sacrosanct and
stubornly refuses to change it.
Fossil allows mistakes to be corrected (for example, check-in comments
can be revised, and check-ins can be moved onto new branches even after
the check-in has occurred) but the correction is an addition to the repository
and the original actions are preserved and displayed alongside
the corrections, thus preserving an historically accurate audit trail.
This is analogous to an accounting practice of marking through an incorrect
entry in a ledger and writing a correction beside it.







Git features the "rebase" command which can be used to change the
sequence of check-ins in the repository.  Rebase can be used to "clean up"
a complex sequence of check-ins to make their intent easier for others
to understand.  This is important if you view the history of a project
as part of the documentation for the project.

Fossil takes an opposing view.  Fossil views history as sacrosanct and
stubbornly refuses to change it.
Fossil allows mistakes to be corrected (for example, check-in comments
can be revised, and check-ins can be moved onto new branches even after
the check-in has occurred) but the correction is an addition to the repository
and the original actions are preserved and displayed alongside
the corrections, thus preserving an historically accurate audit trail.
This is analogous to an accounting practice of marking through an incorrect
entry in a ledger and writing a correction beside it.

Changes to www/hints.wiki.

      edits in any of your Fossil projects.  Use 
      "[/help?cmd=all | fossil all pull]" on your laptop
      prior to going off network (for example, on a long plane ride)
      to make sure you have all the latest content locally.  Then run
      "[/help/all|fossil all push]" when you get back online to upload
      your changes.
  
  5.  Sub-menu options on Timelines lets you select either 20 or 200
      records.  But you can manual edit the "n=" query parameter in the
      URL to get any number of records you desire.  To see a complete
      timeline graph, set n to some ridiculously large value like 10000000.
  
  6.  You can manually add a "c=CHECKIN" query parameter to the timeline
      URL to get a snapshot of what was going on about the time of some
      checkin.  The "CHECKIN" can be
      [./checkin_names.wiki | any valid check-in or version name], including
      tags, branch names, and dates.  For example, to see what was going 
      on in the Fossil repository on 2008-01-01, visit
      [http://www.fossil-scm.org/fossil/timeline?c=2008-01-01].
  
  7.  Further to the previous two hints, there are lots of query parameters
      that you can add to timeline pages.  The available query parameters
      are tersely documented [/help?cmd=/timeline | here].
  
  8.  You can run "[/help?cmd=test-diff | fossil test-diff --tk $file1 $file2]"
      to get a pop-up  window with side-by-side diffs of two files, even if 
      neither of the two files is part of any Fossil repository.  Note that 
      this command is "test-diff", not "diff".
  
  9.  On web pages showing the content of a file (for example
      [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f]) you can manually
      add a query parameter of the form "ln=FROM,TO" to the URL that
      will cause the range of lines indicated to be highlighted.  This
      is useful in pointing out a few lines of code using a hyperlink
      in a email or text message.  Example:
      [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f?ln=28,30].
      Adding the "ln" query parameter without any argument simply turns
      on line numbers.   This feature only works right with files with
      a mimetype of text/plain, of course.
  
  10.  When editing documentation to be checked in as managed files, you can
       preview what the documentation will look like by using the special
       "ckout" branch name in the "doc" URL while running "fossil ui".
       See the [./embeddeddoc.wiki | embedded documentation] for details.







      edits in any of your Fossil projects.  Use 
      "[/help?cmd=all | fossil all pull]" on your laptop
      prior to going off network (for example, on a long plane ride)
      to make sure you have all the latest content locally.  Then run
      "[/help/all|fossil all push]" when you get back online to upload
      your changes.
  


  5.  To see an entire timeline, type "all" into the "Max:" entry box.

  
  6.  You can manually add a "c=CHECKIN" query parameter to the timeline
      URL to get a snapshot of what was going on about the time of some
      check-in.  The "CHECKIN" can be
      [./checkin_names.wiki | any valid check-in or version name], including
      tags, branch names, and dates.  For example, to see what was going 
      on in the Fossil repository on 2008-01-01, visit
      [http://www.fossil-scm.org/fossil/timeline?c=2008-01-01].
  
  7.  Further to the previous two hints, there are lots of query parameters
      that you can add to timeline pages.  The available query parameters
      are tersely documented [/help?cmd=/timeline | here].
  
  8.  You can run "[/help?cmd=test-diff | fossil test-diff --tk $file1 $file2]"
      to get a pop-up  window with side-by-side diffs of two files, even if 
      neither of the two files is part of any Fossil repository.  Note that 
      this command is "test-diff", not "diff".
  
  9.  On web pages showing the content of a file (for example
      [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f]) you can manually
      add a query parameter of the form "ln=FROM,TO" to the URL that
      will cause the range of lines indicated to be highlighted.  This
      is useful in pointing out a few lines of code using a hyperlink
      in an email or text message.  Example:
      [http://www.fossil-scm.org/fossil/artifact/c7dd1de9f?ln=28,30].
      Adding the "ln" query parameter without any argument simply turns
      on line numbers.   This feature only works right with files with
      a mimetype of text/plain, of course.
  
  10.  When editing documentation to be checked in as managed files, you can
       preview what the documentation will look like by using the special
       "ckout" branch name in the "doc" URL while running "fossil ui".
       See the [./embeddeddoc.wiki | embedded documentation] for details.

Changes to www/index.wiki.

<li> [./changes.wiki | Change Log]
<li> [./hacker-howto.wiki | Hacker How-To]
<li> [./hints.wiki | Tip &amp; Hints]
<li> [./permutedindex.html | Documentation Index]
<li> [http://www.fossil-scm.org/schimpf-book/home | Jim Schimpf's book]
<li> Mailing list
     <ul>
     <li> [http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users | sign-up]
     <li> [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives]
     </ul>
</ul>
<center><img src="fossil3.gif"></center>
</div>

<p>Fossil is a simple, high-reliability, distributed software configuration
management with these advanced features:

  1.  <b>Integrated Bug Tracking, Wiki, and Technotes</b> -
      In addition to doing [./concepts.wiki | distributed version control]
      like Git and Mercurial,
      Fossil also supports [./bugtheory.wiki | bug tracking],
      [./wikitheory.wiki | wiki], and [./event.wiki | technotes].








<li> [./changes.wiki | Change Log]
<li> [./hacker-howto.wiki | Hacker How-To]
<li> [./hints.wiki | Tips &amp; Hints]
<li> [./permutedindex.html | Documentation Index]
<li> [http://www.fossil-scm.org/schimpf-book/home | Jim Schimpf's book]
<li> Mailing list
     <ul>
     <li> [http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/fossil-users | sign-up]
     <li> [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org | archives]
     </ul>
</ul>
<center><img src="fossil3.gif"></center>
</div>

<p>Fossil is a simple, high-reliability, distributed software configuration
management system with these advanced features:

  1.  <b>Integrated Bug Tracking, Wiki, and Technotes</b> -
      In addition to doing [./concepts.wiki | distributed version control]
      like Git and Mercurial,
      Fossil also supports [./bugtheory.wiki | bug tracking],
      [./wikitheory.wiki | wiki], and [./event.wiki | technotes].

Changes to www/inout.wiki.

those concepts.

As with the "import" command, the --git option is not required
since the git-fast-export file format is currently the only VCS interchange 
format that Fossil will generate.  However,
future versions of Fossil might add the ability to generate other
VCS interchange formats, and so for compatibility, the use of the --git 
option recommented.

An anonymous user sends this comment:

<blockquote>
The main Fossil branch is called "trunk", while the main git branch is
called "master". After you've exported your FOSSIL repo to git, you won't 
see any files and gitk will complain about a missing "HEAD". You can 
resolve this problem by merging "trunk" with "master"
(first verify using git status that you are on the "master" branch): 
<tt>git merge trunk</tt>
</blockquote>







those concepts.

As with the "import" command, the --git option is not required
since the git-fast-export file format is currently the only VCS interchange 
format that Fossil will generate.  However,
future versions of Fossil might add the ability to generate other
VCS interchange formats, and so for compatibility, the use of the --git 
option is recommended.
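
As a rough sketch (the repository path and directory name are hypothetical),
a complete export into a new Git repository might look like:

<blockquote><pre>
mkdir myproject-git
cd myproject-git
git init
fossil export --git ../myproject.fossil | git fast-import
</pre></blockquote>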

An anonymous user sends this comment:

<blockquote>
The main Fossil branch is called "trunk", while the main git branch is
called "master". After you've exported your FOSSIL repo to git, you won't 
see any files and gitk will complain about a missing "HEAD". You can 
resolve this problem by merging "trunk" with "master"
(first verify using git status that you are on the "master" branch): 
<tt>git merge trunk</tt>
</blockquote>

Changes to www/mkindex.tcl.

set doclist {
  adding_code.wiki {Adding New Features To Fossil}
  adding_code.wiki {Hacking Fossil}
  antibot.wiki {Defense against Spiders and Bots}
  bugtheory.wiki {Bug Tracking In Fossil}
  branching.wiki {Branching, Forking, Merging, and Tagging}
  build.wiki {Compiling and Installing Fossil}
  checkin_names.wiki {Checkin And Version Names}
  checkin.wiki {Check-in Checklist}
  changes.wiki {Fossil Changelog}
  copyright-release.html {Contributor License Agreement}
  concepts.wiki {Fossil Core Concepts}
  contribute.wiki {Contributing Code or Documentation To The Fossil Project}
  customskin.md {Theming: Customizing The Appearance of Web Pages}
  custom_ticket.wiki {Customizing The Ticket System}







set doclist {
  adding_code.wiki {Adding New Features To Fossil}
  adding_code.wiki {Hacking Fossil}
  antibot.wiki {Defense against Spiders and Bots}
  bugtheory.wiki {Bug Tracking In Fossil}
  branching.wiki {Branching, Forking, Merging, and Tagging}
  build.wiki {Compiling and Installing Fossil}
  checkin_names.wiki {Check-in And Version Names}
  checkin.wiki {Check-in Checklist}
  changes.wiki {Fossil Changelog}
  copyright-release.html {Contributor License Agreement}
  concepts.wiki {Fossil Core Concepts}
  contribute.wiki {Contributing Code or Documentation To The Fossil Project}
  customskin.md {Theming: Customizing The Appearance of Web Pages}
  custom_ticket.wiki {Customizing The Ticket System}

Changes to www/permutedindex.html.

<li><a href="antibot.wiki">Bots &mdash; Defense against Spiders and</a></li>
<li><a href="private.wiki">Branches &mdash; Creating, Syncing, and Deleting Private</a></li>
<li><a href="branching.wiki">Branching, Forking, Merging, and Tagging</a></li>
<li><a href="bugtheory.wiki">Bug Tracking In Fossil</a></li>
<li><a href="makefile.wiki">Build Process &mdash; The Fossil</a></li>
<li><a href="changes.wiki">Changelog &mdash; Fossil</a></li>
<li><a href="checkin.wiki">Check-in Checklist</a></li>
<li><a href="checkin_names.wiki">Checkin And Version Names</a></li>
<li><a href="checkin.wiki">Checklist &mdash; Check-in</a></li>
<li><a href="../test/release-checklist.wiki">Checklist &mdash; Pre-Release Testing</a></li>
<li><a href="foss-cklist.wiki">Checklist For Successful Open-Source Projects</a></li>
<li><a href="selfcheck.wiki">Checks &mdash; Fossil Repository Integrity Self</a></li>
<li><a href="contribute.wiki">Code or Documentation To The Fossil Project &mdash; Contributing</a></li>
<li><a href="style.wiki">Code Style Guidelines &mdash; Source</a></li>
<li><a href="build.wiki">Compiling and Installing Fossil</a></li>







<li><a href="antibot.wiki">Bots &mdash; Defense against Spiders and</a></li>
<li><a href="private.wiki">Branches &mdash; Creating, Syncing, and Deleting Private</a></li>
<li><a href="branching.wiki">Branching, Forking, Merging, and Tagging</a></li>
<li><a href="bugtheory.wiki">Bug Tracking In Fossil</a></li>
<li><a href="makefile.wiki">Build Process &mdash; The Fossil</a></li>
<li><a href="changes.wiki">Changelog &mdash; Fossil</a></li>
<li><a href="checkin.wiki">Check-in Checklist</a></li>
<li><a href="checkin_names.wiki">Check-in And Version Names</a></li>
<li><a href="checkin.wiki">Checklist &mdash; Check-in</a></li>
<li><a href="../test/release-checklist.wiki">Checklist &mdash; Pre-Release Testing</a></li>
<li><a href="foss-cklist.wiki">Checklist For Successful Open-Source Projects</a></li>
<li><a href="selfcheck.wiki">Checks &mdash; Fossil Repository Integrity Self</a></li>
<li><a href="contribute.wiki">Code or Documentation To The Fossil Project &mdash; Contributing</a></li>
<li><a href="style.wiki">Code Style Guidelines &mdash; Source</a></li>
<li><a href="build.wiki">Compiling and Installing Fossil</a></li>
<li><a href="webui.wiki">Interface &mdash; The Fossil Web</a></li>
<li><a href="th1.md">Language &mdash; The TH1 Scripting</a></li>
<li><a href="copyright-release.html">License Agreement &mdash; Contributor</a></li>
<li><a href="password.wiki">Management And Authentication &mdash; Password</a></li>
<li><a href="branching.wiki">Merging, and Tagging &mdash; Branching, Forking,</a></li>
<li><a href="fossil-from-msvc.wiki">Microsoft Express 2010 IDE &mdash; Integrating Fossil in the</a></li>
<li><a href="fiveminutes.wiki">Minutes as a Single User &mdash; Update and Running in 5</a></li>
<li><a href="checkin_names.wiki">Names &mdash; Checkin And Version</a></li>
<li><a href="adding_code.wiki">New Features To Fossil &mdash; Adding</a></li>
<li><a href="newrepo.wiki">New Fossil Repository &mdash; How To Create A</a></li>
<li><a href="foss-cklist.wiki">Open-Source Projects &mdash; Checklist For Successful</a></li>
<li><a href="pop.wiki">Operations &mdash; Principles Of</a></li>
<li><a href="tech_overview.wiki">Overview Of The Design And Implementation Of Fossil &mdash; A Technical</a></li>
<li><a href="index.wiki">Page &mdash; Home</a></li>
<li><a href="customskin.md">Pages &mdash; Theming: Customizing The Appearance of Web</a></li>







<li><a href="webui.wiki">Interface &mdash; The Fossil Web</a></li>
<li><a href="th1.md">Language &mdash; The TH1 Scripting</a></li>
<li><a href="copyright-release.html">License Agreement &mdash; Contributor</a></li>
<li><a href="password.wiki">Management And Authentication &mdash; Password</a></li>
<li><a href="branching.wiki">Merging, and Tagging &mdash; Branching, Forking,</a></li>
<li><a href="fossil-from-msvc.wiki">Microsoft Express 2010 IDE &mdash; Integrating Fossil in the</a></li>
<li><a href="fiveminutes.wiki">Minutes as a Single User &mdash; Update and Running in 5</a></li>
<li><a href="checkin_names.wiki">Names &mdash; Check-in And Version</a></li>
<li><a href="adding_code.wiki">New Features To Fossil &mdash; Adding</a></li>
<li><a href="newrepo.wiki">New Fossil Repository &mdash; How To Create A</a></li>
<li><a href="foss-cklist.wiki">Open-Source Projects &mdash; Checklist For Successful</a></li>
<li><a href="pop.wiki">Operations &mdash; Principles Of</a></li>
<li><a href="tech_overview.wiki">Overview Of The Design And Implementation Of Fossil &mdash; A Technical</a></li>
<li><a href="index.wiki">Page &mdash; Home</a></li>
<li><a href="customskin.md">Pages &mdash; Theming: Customizing The Appearance of Web</a></li>
<li><a href="tickets.wiki">Ticket System &mdash; The Fossil</a></li>
<li><a href="hints.wiki">Tips And Usage Hints &mdash; Fossil</a></li>
<li><a href="bugtheory.wiki">Tracking In Fossil &mdash; Bug</a></li>
<li><a href="fiveminutes.wiki">Update and Running in 5 Minutes as a Single User</a></li>
<li><a href="hints.wiki">Usage Hints &mdash; Fossil Tips And</a></li>
<li><a href="fiveminutes.wiki">User &mdash; Update and Running in 5 Minutes as a Single</a></li>
<li><a href="ssl.wiki">Using SSL with Fossil</a></li>
<li><a href="checkin_names.wiki">Version Names &mdash; Checkin And</a></li>
<li><a href="fossil-v-git.wiki">Versus Git &mdash; Fossil</a></li>
<li><a href="webui.wiki">Web Interface &mdash; The Fossil</a></li>
<li><a href="customskin.md">Web Pages &mdash; Theming: Customizing The Appearance of</a></li>
<li><a href="quotes.wiki">What People Are Saying About Fossil, Git, and DVCSes in General &mdash; Quotes:</a></li>
<li><a href="wikitheory.wiki">Wiki In Fossil</a></li>
<li><a href="ssl.wiki">with Fossil &mdash; Using SSL</a></li>
</ul></div>







<li><a href="tickets.wiki">Ticket System &mdash; The Fossil</a></li>
<li><a href="hints.wiki">Tips And Usage Hints &mdash; Fossil</a></li>
<li><a href="bugtheory.wiki">Tracking In Fossil &mdash; Bug</a></li>
<li><a href="fiveminutes.wiki">Update and Running in 5 Minutes as a Single User</a></li>
<li><a href="hints.wiki">Usage Hints &mdash; Fossil Tips And</a></li>
<li><a href="fiveminutes.wiki">User &mdash; Update and Running in 5 Minutes as a Single</a></li>
<li><a href="ssl.wiki">Using SSL with Fossil</a></li>
<li><a href="checkin_names.wiki">Version Names &mdash; Check-in And</a></li>
<li><a href="fossil-v-git.wiki">Versus Git &mdash; Fossil</a></li>
<li><a href="webui.wiki">Web Interface &mdash; The Fossil</a></li>
<li><a href="customskin.md">Web Pages &mdash; Theming: Customizing The Appearance of</a></li>
<li><a href="quotes.wiki">What People Are Saying About Fossil, Git, and DVCSes in General &mdash; Quotes:</a></li>
<li><a href="wikitheory.wiki">Wiki In Fossil</a></li>
<li><a href="ssl.wiki">with Fossil &mdash; Using SSL</a></li>
</ul></div>

Changes to www/private.wiki.

fossil commit --private
</pre></blockquote>

The --private option causes Fossil to put the check-in in a new branch
named "private".  That branch will not participate in subsequent clone,
sync, push, or pull operations.  The branch will remain on the one local
repository where it was created.  Note that you only use the --private
option for the first checkin that creates the private branch.
Additional checkins into the private branch remain private automatically.

<h2>Publishing Private Changes</h2>

After additional work, one might desire to publish the changes associated
with a private branch.  The usual way to do this is to merge those
changes into a public branch.  For example:







fossil commit --private
</pre></blockquote>

The --private option causes Fossil to put the check-in in a new branch
named "private".  That branch will not participate in subsequent clone,
sync, push, or pull operations.  The branch will remain on the one local
repository where it was created.  Note that you only use the --private
option for the first check-in that creates the private branch.
Additional check-ins into the private branch remain private automatically.
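
A minimal sketch of that sequence:

<blockquote><pre>
fossil commit --private -m "begin experimental work"
# later check-ins on this branch stay private without the option:
fossil commit -m "continue experimental work"
</pre></blockquote>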

<h2>Publishing Private Changes</h2>

After additional work, one might desire to publish the changes associated
with a private branch.  The usual way to do this is to merge those
changes into a public branch.  For example:
visible to other users of the project.

<h2>Syncing Private Branches</h2>

A private branch normally stays on the one repository where it was
originally created.  But sometimes you want to share private branches
with another repository.  For example, you might be building a cross-platform
application and have separate repositories on your windows laptop, 
your linux desktop, and your iMac.  You can transfer private branches
between these machines by using the --private option on the "sync",
"push", "pull", and "clone" commands.  For example, if you are running
"fossil server" on your linux box and you want to clone that repository
to your Mac, including all private branches, use:

<blockquote><pre>
fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil
</pre></blockquote>

You'll have to supply a username and password in order for this to work.







visible to other users of the project.

<h2>Syncing Private Branches</h2>

A private branch normally stays on the one repository where it was
originally created.  But sometimes you want to share private branches
with another repository.  For example, you might be building a cross-platform
application and have separate repositories on your Windows laptop, 
your Linux desktop, and your iMac.  You can transfer private branches
between these machines by using the --private option on the "sync",
"push", "pull", and "clone" commands.  For example, if you are running
"fossil server" on your Linux box and you want to clone that repository
to your Mac, including all private branches, use:

<blockquote><pre>
fossil clone --private http://user@linux.localnetwork:8080/ mac-clone.fossil
</pre></blockquote>

You'll have to supply a username and password in order for this to work.

Changes to www/quickstart.wiki.

    <p>Note: If you are behind a restrictive firewall, you might need
    to <a href="#proxy">specify an HTTP proxy</a>.</p>

    <p>A Fossil repository is a single disk file.  Instead of cloning,
    you can just make a copy of the repository file (for example, using
    "scp").  Note, however, that the repository file contains auxiliary
    information above and beyond the versioned files, including some
    sensitive information such as password hashs and email addresses.  If you
    want to share Fossil repositories directly, consider running the
    [/help/scrub|fossil scrub] command to remove sensitive information
    before transmitting the file.
    
<h2>Importing From Another Version Control System</h2>

    <p>Rather than start a new project, or clone an existing Fossil project,







    <p>Note: If you are behind a restrictive firewall, you might need
    to <a href="#proxy">specify an HTTP proxy</a>.</p>

    <p>A Fossil repository is a single disk file.  Instead of cloning,
    you can just make a copy of the repository file (for example, using
    "scp").  Note, however, that the repository file contains auxiliary
    information above and beyond the versioned files, including some
    sensitive information such as password hashes and email addresses.  If you
    want to share Fossil repositories directly, consider running the
    [/help/scrub|fossil scrub] command to remove sensitive information
    before transmitting the file.
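    <p>A minimal sketch of that workflow (the file names are hypothetical):</p>
    <blockquote><pre>
    cp project.fossil project-copy.fossil
    fossil scrub project-copy.fossil
    </pre></blockquote>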
    
<h2>Importing From Another Version Control System</h2>

    <p>Rather than start a new project, or clone an existing Fossil project,
    <p>Servers are also easily configured as:
    <ul>
    <li>[./server.wiki#inetd|inetd/xinetd]
    <li>[./server.wiki#cgi|CGI]
    <li>[./server.wiki#scgi|SCGI]
    </ul>

    <p>The the [./selfhost.wiki | self-hosting fossil repositories] use
    CGI.  

<a name="proxy"></a>
<h2>HTTP Proxies</h2>

    <p>If you are behind a restrictive firewall that requires you to use
    an HTTP proxy to reach the internet, then you can configure the proxy







    <p>Servers are also easily configured as:
    <ul>
    <li>[./server.wiki#inetd|inetd/xinetd]
    <li>[./server.wiki#cgi|CGI]
    <li>[./server.wiki#scgi|SCGI]
    </ul>

    <p>The [./selfhost.wiki | self-hosting fossil repositories] use
    CGI.  

<a name="proxy"></a>
<h2>HTTP Proxies</h2>

    <p>If you are behind a restrictive firewall that requires you to use
    an HTTP proxy to reach the internet, then you can configure the proxy

Changes to www/quotes.wiki.

<title>What People Are Saying</title>

The following are collected quotes from various forums and blogs about
Fossil, Git, and DVCSes in general.  This collection is put together
by the creator of Fossil, so of course there is selection bias...

<h2>On The Usability Of Git:</h2>

<ol>
<li>Git approaches the useability of iptables, which is to say, utterly 
unusable unless you have the manpage tattooed on you arm.

<blockquote>
<i>by mml at [http://news.ycombinator.com/item?id=1433387]</i>
</blockquote>

<li><nowiki>It's simplest to think of the state of your [git] repository









<title>What People Are Saying</title>

The following are collected quotes from various forums and blogs about
Fossil, Git, and DVCSes in general.  This collection is put together
by the creator of Fossil, so of course there is selection bias...

<h2>On The Usability Of Git:</h2>

<ol>
<li>Git approaches the usability of iptables, which is to say, utterly 
unusable unless you have the manpage tattooed on you arm.

<blockquote>
<i>by mml at [http://news.ycombinator.com/item?id=1433387]</i>
</blockquote>

<li><nowiki>It's simplest to think of the state of your [git] repository
<li>Fossil is awesome!!! I have never seen an app like that before, 
such simplicity and flexibility!!!

<blockquote>
<i>zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer]</i>
</blockquote>










</ol>


<h2>On Git Versus Fossil</h2>

<ol>
<li value=13>
Just want to say thanks for fossil making my life easier.... 
Also <nowiki>[for]</nowiki> not having a misanthropic command line interface.

<blockquote>
<i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i>
</blockquote>








<li>Fossil is awesome!!! I have never seen an app like that before, 
such simplicity and flexibility!!!

<blockquote>
<i>zengr at [http://stackoverflow.com/questions/138621/best-version-control-for-lone-developer]</i>
</blockquote>

<li>This is my favourite VCS. I can carry it on a USB. And it's a complete system, with it's own
server, ticketing system, Wiki pages, and a very, very helpful timeline visualization. And 
the entire program in a single file!

<blockquote>
<i>thunderbong commenting on hacker news: [https://news.ycombinator.com/item?id=9131619]</i>
</blockquote>


</ol>


<h2>On Git Versus Fossil</h2>

<ol>
<li value=14>
Just want to say thanks for fossil making my life easier.... 
Also <nowiki>[for]</nowiki> not having a misanthropic command line interface.

<blockquote>
<i>Joshua Paine at [http://www.mail-archive.com/fossil-users@lists.fossil-scm.org/msg02736.html]</i>
</blockquote>

Changes to www/scgi.wiki.

    scgi_pass localhost:9000;
    scgi_param SCRIPT_NAME "/demo_project";
}
</pre></blockquote>

Note that Nginx does not normally send either the PATH_INFO or SCRIPT_NAME
variables via SCGI, but Fossil needs one or the other.  So the configuration
above needs to add SCRIPT_NAME.  If you do not do this, Fossil return an
error.







    scgi_pass localhost:9000;
    scgi_param SCRIPT_NAME "/demo_project";
}
</pre></blockquote>

Note that Nginx does not normally send either the PATH_INFO or SCRIPT_NAME
variables via SCGI, but Fossil needs one or the other.  So the configuration
above needs to add SCRIPT_NAME.  If you do not do this, Fossil returns an
error.
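
A minimal sketch of the matching Fossil side, assuming the repository lives
at /home/fossil/demo_project.fossil (the path and port are illustrative), is
to run Fossil as an SCGI server on the port that Nginx forwards to:

<blockquote><pre>
fossil server /home/fossil/demo_project.fossil --scgi --port 9000
</pre></blockquote>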

Changes to www/selfcheck.wiki.

files.

<h2>Checksum Over All Files In A Check-in</h2>

Manifest artifacts that define a check-in have two fields (the
R-card and Z-card) that record MD5 hashes of the manifest itself
and of all other files in the manifest.  Prior to any check-in
commit, these checksums are verified to ensure that the checkin
agrees exactly with what is on disk.  Similarly,
the repository checksum is verified after a checkout to make
sure that the entire repository was checked out correctly.
Note that these added checks use a different hash (MD5 instead
of SHA1) in order to avoid common-mode failures in the hash
algorithm implementation.








files.

<h2>Checksum Over All Files In A Check-in</h2>

Manifest artifacts that define a check-in have two fields (the
R-card and Z-card) that record MD5 hashes of the manifest itself
and of all other files in the manifest.  Prior to any check-in
commit, these checksums are verified to ensure that the check-in
agrees exactly with what is on disk.  Similarly,
the repository checksum is verified after a checkout to make
sure that the entire repository was checked out correctly.
Note that these added checks use a different hash (MD5 instead
of SHA1) in order to avoid common-mode failures in the hash
algorithm implementation.
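
For illustration, the tail of a manifest might contain cards like the
following (the MD5 values shown are hypothetical placeholders):

<blockquote><pre>
R 73d3c8a4559f1e2b90ab47c0d3b2f611
Z 6a1c3f09d8e7b5a4c2d1e0f9a8b7c6d5
</pre></blockquote>

Here the R card is the checksum over the content of all files in the
check-in and the Z card is the checksum of the manifest text that precedes
it.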

Changes to www/server.wiki.

their behavior.  See the [/help/server|online documentation] for an overview.
</p>
</blockquote>
<a name="inetd"></a>
<h2>Fossil as an inetd/xinetd or stunnel service</h2><blockquote>
<p>
A Fossil server can be launched on-demand by inetd or xinetd using
the [/help/http|fossil http] command.  To launch Fossil from inetd, modify
your inetd configuration file (typically "/etc/inetd.conf") to contain a
line something like this:
<blockquote>
<pre>
12345 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil
</pre>
</blockquote>
In this example, you are telling "inetd" that when an incoming connection
appears on port "12345", that it should launch the binary "/usr/bin/fossil"
program with the arguments shown.
Obviously you will
need to modify the pathnames for your particular setup.
The final argument is either the name of the fossil repository to be served,
or a directory containing multiple repositories.
</p>













<p>
If your system is running xinetd, then the configuration is likely to be
in the file "/etc/xinetd.conf" or in a subfile of "/etc/xinetd.d".
An xinetd configuration file will appear like this:</p>
<blockquote>
<pre>
service http-alt







their behavior.  See the [/help/server|online documentation] for an overview.
</p>
</blockquote>
<a name="inetd"></a>
<h2>Fossil as an inetd/xinetd or stunnel service</h2><blockquote>
<p>
A Fossil server can be launched on-demand by inetd or xinetd using
the [/help/http|fossil http] command. To launch Fossil from inetd, modify
your inetd configuration file (typically "/etc/inetd.conf") to contain a
line something like this:
<blockquote>
<pre>
12345 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repo.fossil
</pre>
</blockquote>
In this example, you are telling "inetd" that when an incoming connection
appears on port "12345", that it should launch the binary "/usr/bin/fossil"
program with the arguments shown.
Obviously you will
need to modify the pathnames for your particular setup.
The final argument is either the name of the fossil repository to be served,
or a directory containing multiple repositories.
</p>
<p>
For systems where the port-specification must be a symbolic name and cannot be 
numeric, add the desired name and port to /etc/services, e.g.:
<blockquote>
<pre>
fossil          12345/tcp  #fossil server
</pre>
</blockquote>
and use the symbolic name ('fossil' in this example) instead of the numeral ('12345') 
in inetd.conf. For details, see the relevant section in your system's documentation, e.g. 
the [https://www.freebsd.org/doc/en/books/handbook/network-inetd.html|FreeBSD Handbook] in 
case you use FreeBSD.
</p>
<p>
If your system is running xinetd, then the configuration is likely to be
in the file "/etc/xinetd.conf" or in a subfile of "/etc/xinetd.d".
An xinetd configuration file will appear like this:</p>
<blockquote>
<pre>
service http-alt
<p>
In both cases notice that Fossil was launched as root.  This is not required,
but if it is done, then Fossil will automatically put itself into a chroot
jail for the user who owns the fossil repository before reading any information
off of the wire.
</p>
<p>




[http://www.stunnel.org/ | Stunnel version 4] is an inetd-like process that
accepts and decodes SSL-encrypted connections.  Fossil can be run directly from
stunnel in a mannar similar to inetd and xinetd.  This can be used to provide
a secure link to a Fossil project.  The configuration needed to get stunnel4
to invoke Fossil is very similar to the inetd and xinetd examples shown above.
The relevant parts of an stunnel configuration might look something
like the following:
<blockquote><pre><nowiki>
[https]
accept       = www.ubercool-project.org:443
TIMEOUTclose = 0
exec         = /usr/bin/fossil
execargs     = /usr/bin/fossil http /home/fossil/ubercool.fossil --https
</nowiki></pre></blockquote>
See the stunnel4 documentation for further details bout the /etc/stunnel/stunnel.conf
configuration file.  Note that the [/help/http|fossil http] command should include 
the --https option to let Fossil know to use "https" instead of "http" as the scheme
on generated hyperlinks.
<p>
Using inetd or xinetd or stunnel is a more complex setup 
than the "standalone" server, but it has the
advantage of only using system resources when an actual connection is







<p>
In both cases notice that Fossil was launched as root.  This is not required,
but if it is done, then Fossil will automatically put itself into a chroot
jail for the user who owns the fossil repository before reading any information
off of the wire.
</p>
<p>
Inetd or xinetd must be enabled, and must be (re)started whenever their configuration
changes - consult your system's documentation for details. 
</p>
<p>
[https://www.stunnel.org/ | Stunnel version 5] is an inetd-like process that
accepts and decodes SSL-encrypted connections.  Fossil can be run directly from
stunnel in a manner similar to inetd and xinetd.  This can be used to provide
a secure link to a Fossil project.  The configuration needed to get stunnel5
to invoke Fossil is very similar to the inetd and xinetd examples shown above.
The relevant parts of an stunnel configuration might look something
like the following:
<blockquote><pre><nowiki>
[https]
accept       = www.ubercool-project.org:443
TIMEOUTclose = 0
exec         = /usr/bin/fossil
execargs     = /usr/bin/fossil http /home/fossil/ubercool.fossil --https
</nowiki></pre></blockquote>
See the stunnel5 documentation for further details about the /etc/stunnel/stunnel.conf
configuration file.  Note that the [/help/http|fossil http] command should include 
the --https option to let Fossil know to use "https" instead of "http" as the scheme
on generated hyperlinks.
<p>
Using inetd or xinetd or stunnel is a more complex setup 
than the "standalone" server, but it has the
advantage of only using system resources when an actual connection is
as [http://www.sqlite.org] and [http://system.data.sqlite.org]) and
it has a typical load of 0.05 to 0.1.  A single HTTP request to Fossil
normally takes less than 10 milliseconds of CPU time to complete.  So
requests can be arriving at a continuous rate of 20 or more per second 
and the CPU can still be mostly idle.
<p>
However, there are some Fossil web pages that can consume large 
amounts of CPU time, expecially on repositories with a large number
of files or with long revision histories.  High CPU usage pages include
[/help?cmd=/zip | /zip], [/help?cmd=/tarball | /tarball],
[/help?cmd=/annotate | /annotate] and others.  On very large repositories,
these commands can take 15 seconds or more of CPU time.  
If these kinds of requests arrive too quickly, the load average on the
server can grow dramatically, making the server unresponsive.
<p>







as [http://www.sqlite.org] and [http://system.data.sqlite.org]) and
it has a typical load of 0.05 to 0.1.  A single HTTP request to Fossil
normally takes less than 10 milliseconds of CPU time to complete.  So
requests can be arriving at a continuous rate of 20 or more per second 
and the CPU can still be mostly idle.
<p>
However, there are some Fossil web pages that can consume large 
amounts of CPU time, especially on repositories with a large number
of files or with long revision histories.  High CPU usage pages include
[/help?cmd=/zip | /zip], [/help?cmd=/tarball | /tarball],
[/help?cmd=/annotate | /annotate] and others.  On very large repositories,
these commands can take 15 seconds or more of CPU time.  
If these kinds of requests arrive too quickly, the load average on the
server can grow dramatically, making the server unresponsive.
<p>
fossil set max-loadavg 1.5
fossil all set max-loadavg 1.5
</pre></blockquote>
The second form is especially useful for changing the maximum load average
simultaneously on a large number of repositories.
<p>
Note that this load-average limiting feature is only available on operating
systems that support the "getloadavg()" API.  Most modern unix systems have
this interface, but Windows does not, so the feature will not work on Windows.
Note also that Linux implements "getloadavg()" by accessing the "/proc/loadavg"
file in the "proc" virtual filesystem.  If you are running a Fossil instance
inside a chroot() jail on Linux, you will need to make the "/proc" file
system available inside that jail in order for this feature to work.  On
the self-hosting Fossil repository, this was accomplished by adding a line
to the "/etc/mtab" file that looks like:







fossil set max-loadavg 1.5
fossil all set max-loadavg 1.5
</pre></blockquote>
The second form is especially useful for changing the maximum load average
simultaneously on a large number of repositories.
<p>
Note that this load-average limiting feature is only available on operating
systems that support the "getloadavg()" API.  Most modern Unix systems have
this interface, but Windows does not, so the feature will not work on Windows.
Note also that Linux implements "getloadavg()" by accessing the "/proc/loadavg"
file in the "proc" virtual filesystem.  If you are running a Fossil instance
inside a chroot() jail on Linux, you will need to make the "/proc" file
system available inside that jail in order for this feature to work.  On
the self-hosting Fossil repository, this was accomplished by adding a line
to the "/etc/mtab" file that looks like:

Changes to www/stats.wiki.

<title>Fossil Performance</title>
<h1 align="center">Performance Statistics</h1>

The questions will inevitably arise:  How does Fossil perform? 
Does it use a lot of disk space or bandwidth?  Is it scalable?

In an attempt to answers these questions, this report looks at several
projects that use fossil for configuration management and examines how
well they are working.  The following table is a summary of the results.
(Last updated on 2012-02-26.)
Explanation and analysis follows the table.

<table border=1>
<tr>
<th>Project</th>
<th>Number Of Artifacts</th>
<th>Number Of Check-ins</th>
<th>Project&nbsp;Duration<br>(as of 2009-08-23)</th>
<th>Average Check-ins Per Day</th>
<th>Uncompressed Size</th>
<th>Repository Size</th>
<th>Compression Ratio</th>
<th>Clone Bandwidth</th>
</tr>

<tr align="center">
<td>[http://www.sqlite.org/src/timeline | SQLite]
<td>41113
<td>9943
<td>4290&nbsp;days<br>11.75&nbsp;yrs
<td>2.32
<td>2.09&nbsp;GB
<td>33.2&nbsp;MB
<td>63:1
<td>23.2&nbsp;MB
</tr>

<tr align="center">
<td>[http://core.tcl.tk/tcl/timeline | TCL]
<td>74806
<td>13541
<td>5085&nbsp;days<br>13.92&nbsp;yrs
<td>2.66
<td>5.2&nbsp;GB
<td>86&nbsp;MB
<td>60:1
<td>67.0&nbsp;MB
</tr>

<tr align="center">
<td>[/timeline | Fossil]
<td>15561
<td>3764
<td>1681&nbsp;days<br>4.6&nbsp;yrs
<td>2.24
<td>721&nbsp;MB
<td>18.8&nbsp;MB
<td>38:1
<td>12.0&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/slt/timeline | SLT]
<td>2174
<td>100
<td>1183&nbsp;days<br>3.24&nbsp;yrs
<td>0.08
<td>1.94&nbsp;GB
<td>143&nbsp;MB
<td>12:1
<td>141&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/th3.html | TH3]
<td>5624
<td>1472
<td>1248&nbsp;days<br>3.42&nbsp;yrs
<td>1.78
<td>252&nbsp;MB
<td>12.5&nbsp;MB
<td>20:1
<td>12.2&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/docsrc/timeline | SQLite Docs]
<td>3664
<td>1003
<td>1567&nbsp;days<br>4.29&nbsp;yrs
<td>0.64
<td>108&nbsp;MB
<td>6.6&nbsp;MB
<td>16:1
<td>5.71&nbsp;MB
</tr>

</table>

<h2>Measured Attributes</h2>

In Fossil, every version of every file, every wiki page, every change to









<title>Fossil Performance</title>
<h1 align="center">Performance Statistics</h1>

The questions will inevitably arise:  How does Fossil perform? 
Does it use a lot of disk space or bandwidth?  Is it scalable?

In an attempt to answer these questions, this report looks at several
projects that use fossil for configuration management and examines how
well they are working.  The following table is a summary of the results.
(Last updated on 2015-02-28.)
Explanation and analysis follows the table.

<table border=1>
<tr>
<th>Project</th>
<th>Number Of Artifacts</th>
<th>Number Of Check-ins</th>
<th>Project&nbsp;Duration<br>(as of 2015-02-28)</th>

<th>Uncompressed Size</th>
<th>Repository Size</th>
<th>Compression Ratio</th>
<th>Clone Bandwidth</th>
</tr>

<tr align="center">
<td>[http://www.sqlite.org/src/timeline | SQLite]
<td>56089
<td>14357
<td>5388&nbsp;days<br>14.75&nbsp;years

<td>3.4&nbsp;GB
<td>45.5&nbsp;MB
<td>73:1
<td>29.9&nbsp;MB
</tr>

<tr align="center">
<td>[http://core.tcl.tk/tcl/timeline | TCL]
<td>139662
<td>18125
<td>6183&nbsp;days<br>16.93&nbsp;years

<td>6.6&nbsp;GB
<td>192.6&nbsp;MB
<td>34:1
<td>117.1&nbsp;MB
</tr>

<tr align="center">
<td>[/timeline | Fossil]
<td>29937
<td>8238
<td>2779&nbsp;days<br>7.61&nbsp;years
<td>2.3&nbsp;GB
<td>34.0&nbsp;MB

<td>67:1
<td>21.5&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/slt/timeline | SLT]
<td>2278
<td>136
<td>2282&nbsp;days<br>6.25&nbsp;years

<td>2.0&nbsp;GB
<td>144.7&nbsp;MB
<td>13:1
<td>142.1&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/th3.html | TH3]
<td>8729
<td>2406
<td>2347&nbsp;days<br>6.43&nbsp;years

<td>386&nbsp;MB
<td>15.2&nbsp;MB
<td>25:1
<td>12.6&nbsp;MB
</tr>

<tr align="center">
<td>[http://www.sqlite.org/docsrc/timeline | SQLite Docs]
<td>5503
<td>1631
<td>2666&nbsp;days<br>7.30&nbsp;years

<td>194&nbsp;MB
<td>10.9&nbsp;MB
<td>17:1
<td>8.37&nbsp;MB
</tr>

</table>

<h2>Measured Attributes</h2>

In Fossil, every version of every file, every wiki page, every change to
changed, it still only counts as one check-in.

The "Uncompressed Size" is the total size of all the artifacts within
the repository assuming they were all uncompressed and stored 
separately on the disk.  Fossil makes use of delta compression between related
versions of the same file, and then uses zlib compression on the resulting
deltas.  The total resulting repository size is shown after the uncompressed
size.  For this chart, "fossil rebuild --compress" was run on each repository
prior to measuring its compressed size.  Repository sizes would typically 
be 20% larger without that rebuild.
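As a rough illustration of how that measurement works (the repository
filename below is just a placeholder), the rebuild and the before/after
size check amount to nothing more than:

    ls -l repo.fossil
    fossil rebuild --compress repo.fossil
    ls -l repo.fossil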

On the right end of the table, we show the "Clone Bandwidth".  This is the
total number of bytes sent from server back to the client.  The number of
bytes sent from client to server is neglible in comparison.
These byte counts include HTTP protocol overhead.

In the table and throughout this article,
"GB" means gigabytes (10<sup><small>9</small></sup> bytes)
not <a href="http://en.wikipedia.org/wiki/Gibibyte">gibibytes</a>
(2<sup><small>30</small></sup> bytes).  Similarly, "MB" and "KB"
means megabytes and kilobytes, not mebibytes and kibibytes.

<h2>Analysis And Supplimental Data</h2>

Perhaps the two most interesting datapoints in the above table are SQLite
and SLT.  SQLite is a long-running project with long revision chains.
Some of the files in SQLite have been edited over a thousand times.
Each of these edits is stored as a delta, and hence the SQLite project
gets excellent 63:1 compression.  SLT, on the other hand, consists of
many large (megabyte-sized) SQL scripts that have one or maybe two
edits each.  There is very little delta compression occurring and so the
overall repository compression ratio is much lower.  Note also that
quite a bit more bandwidth is required to clone SLT than SQLite.

For the first nine years of its development, SQLite was versioned by CVS.
The resulting CVS repository measured over 320MB in size.  So, the
developers were surprised to see that this entire project could be cloned in
fossil using only about 23.2MB of network traffic.  (This 23.2MB includes
all the changes to SQLite that have been made since the conversion from
CVS.  If those changes are omitted, the clone bandwidth drops to 13MB.)
The "sync" protocol
used by fossil has turned out to be surprisingly efficient.  A typical
check-in on SQLite might use 3 or 4KB of network bandwidth total.  Hardly
worth measuring.  The sync protocol is efficient enough that, once cloned,
Fossil could easily be used over a dial-up connection.







changed, it still only counts as one check-in.

The "Uncompressed Size" is the total size of all the artifacts within
the repository assuming they were all uncompressed and stored 
separately on the disk.  Fossil makes use of delta compression between related
versions of the same file, and then uses zlib compression on the resulting
deltas.  The total resulting repository size is shown after the uncompressed
size.
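For example, taking the SQLite row of the table above, 3.4&nbsp;GB of
uncompressed content held in a 45.5&nbsp;MB repository works out to roughly
75:1, which agrees with the 73:1 shown in the "Compression Ratio" column
once rounding of the displayed sizes is taken into account.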


On the right end of the table, we show the "Clone Bandwidth".  This is the
total number of bytes sent from server back to the client.  The number of
bytes sent from client to server is negligible in comparison.
These byte counts include HTTP protocol overhead.

In the table and throughout this article,
"GB" means gigabytes (10<sup><small>9</small></sup> bytes)
not <a href="http://en.wikipedia.org/wiki/Gibibyte">gibibytes</a>
(2<sup><small>30</small></sup> bytes).  Similarly, "MB" and "KB"
means megabytes and kilobytes, not mebibytes and kibibytes.

<h2>Analysis And Supplemental Data</h2>

Perhaps the two most interesting datapoints in the above table are SQLite
and SLT.  SQLite is a long-running project with long revision chains.
Some of the files in SQLite have been edited over a thousand times.
Each of these edits is stored as a delta, and hence the SQLite project
gets excellent 73:1 compression.  SLT, on the other hand, consists of
many large (megabyte-sized) SQL scripts that have one or maybe two
edits each.  There is very little delta compression occurring and so the
overall repository compression ratio is much lower.  Note also that
quite a bit more bandwidth is required to clone SLT than SQLite.

For the first nine years of its development, SQLite was versioned by CVS.
The resulting CVS repository measured over 320MB in size.  So, the
developers were surprised to see that the equivalent Fossil project (the
first nine years on SQLite) would clone with only 13MB of bandwidth.
The "sync" protocol
used by fossil has turned out to be surprisingly efficient.  A typical
check-in on SQLite might use 3 or 4KB of network bandwidth.
For example, the [04eef9522386a59e] check-in used a single HTTP request
of 2099 bytes and got back a reply of 1116 bytes.
The sync protocol is efficient enough that, once cloned,
Fossil can easily be used over a dial-up connection.
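For anyone who wants to repeat the clone-bandwidth measurement informally,
cloning the SQLite repository (using the URL from the table above; the
output filename is arbitrary) reports the amount of data transferred as it
runs:

    fossil clone http://www.sqlite.org/src sqlite.fossil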

Changes to www/style.wiki.

       contain no profanity, obscenity, or innuendo.

  16.  All C-code conforms to ANSI C-89.

  17.  All comments and identifiers are in English.

  18.  The program is single-threaded.  Do not use threads.
       (One exception to this is the HTTP server implementation for windows,
       which we do not know how to implement without the use of threads.)


<b>C preprocessor macros</b>:

  20.  The purpose of every preprocessor macro is clearly explained in a
       comment associated with its definition.







       contain no profanity, obscenity, or innuendo.

  16.  All C-code conforms to ANSI C-89.

  17.  All comments and identifiers are in English.

  18.  The program is single-threaded.  Do not use threads.
       (One exception to this is the HTTP server implementation for Windows,
       which we do not know how to implement without the use of threads.)


<b>C preprocessor macros</b>:

  20.  The purpose of every preprocessor macro is clearly explained in a
       comment associated with its definition.

<b>Function header comments</b>:

  30.  Every function has a header comment describing the purpose and use
       of the function.

  31.  Function header comment defines the behavior of the function in
       sufficient detail to allow the function to be reimplemented from
       scratch without reference to the original code.

  32.  Functions that perform dynamic memory allocation (either directly
       or indirectly via subfunctions) say so in their header comments.


<b>Function bodies</b>:








<b>Function header comments</b>:

  30.  Every function has a header comment describing the purpose and use
       of the function.

  31.  Function header comment defines the behavior of the function in
       sufficient detail to allow the function to be re-implemented from
       scratch without reference to the original code.

  32.  Functions that perform dynamic memory allocation (either directly
       or indirectly via subfunctions) say so in their header comments.


<b>Function bodies</b>:

Changes to www/tech_overview.wiki.

a way to change settings for all repositories with a single command, rather
than having to change the setting individually on each repository.

The configuration database also maintains a list of repositories.  This
list is used by the [/help/all | fossil all] command in order to run various
operations such as "sync" or "rebuild" on all repositories managed by a user.

On unix systems, the configuration database is named ".fossil" and is
located in the user's home directory.  On windows, the configuration
database is named "_fossil" (using an underscore as the first character
instead of a dot) and is located in the directory specified by the
LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order.




<h3>2.2 Repository Databases</h3>

The repository database is the file that is commonly referred to as 
"the repository".  This is because the repository database contains,
among other things, the complete revision, ticket, and wiki history for
a project.  It is customary to name the repository database after the
name of the project, with a ".fossil" suffix.  For example, the repository







a way to change settings for all repositories with a single command, rather
than having to change the setting individually on each repository.

The configuration database also maintains a list of repositories.  This
list is used by the [/help/all | fossil all] command in order to run various
operations such as "sync" or "rebuild" on all repositories managed by a user.

On Unix systems, the configuration database is named ".fossil" and is
located in the user's home directory.  On Windows, the configuration
database is named "_fossil" (using an underscore as the first character
instead of a dot) and is located in the directory specified by the
LOCALAPPDATA, APPDATA, or HOMEPATH environment variables, in that order.

You can override this default location by setting the environment
variable FOSSIL_HOME to point to an appropriate (writable) directory.
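A minimal sketch of overriding that location (the directory shown is only an
example) and then running one of the commands that consult the configuration
database:

    export FOSSIL_HOME=/home/alice/fossil-config
    fossil all sync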

<h3>2.2 Repository Databases</h3>

The repository database is the file that is commonly referred to as 
"the repository".  This is because the repository database contains,
among other things, the complete revision, ticket, and wiki history for
a project.  It is customary to name the repository database after the
name of the project, with a ".fossil" suffix.  For example, the repository

<h4>2.2.2 Project Metadata</h4>

The global project state information in the repository database is
supplemented by computed metadata that makes querying the project state
more efficient.  Metadata includes information such as the following:

  *  The names for all files found in any checkin.
  *  All check-ins that modify a given file
  *  Parents and children of each checkin.
  *  Potential timeline rows.
  *  The names of all symbolic tags and the checkins they apply to.
  *  The names of all wiki pages and the artifacts that comprise each
     wiki page.
  *  Attachments and the wiki pages or tickets they apply to.
  *  Current content of each ticket.
  *  Cross-references between tickets, checkins, and wiki pages.

The metadata is held in various SQL tables in the repository database.
The metadata is designed to facilitate queries for the various timelines and
reports that Fossil generates.
As the functionality of Fossil evolves,
the schema for the metadata can and does change.
But schema changes do not invalidate the repository.  Remember that the








<h4>2.2.2 Project Metadata</h4>

The global project state information in the repository database is
supplemented by computed metadata that makes querying the project state
more efficient.  Metadata includes information such as the following:

  *  The names for all files found in any check-in.
  *  All check-ins that modify a given file
  *  Parents and children of each check-in.
  *  Potential timeline rows.
  *  The names of all symbolic tags and the check-ins they apply to.
  *  The names of all wiki pages and the artifacts that comprise each
     wiki page.
  *  Attachments and the wiki pages or tickets they apply to.
  *  Current content of each ticket.
  *  Cross-references between tickets, check-ins, and wiki pages.

The metadata is held in various SQL tables in the repository database.
The metadata is designed to facilitate queries for the various timelines and
reports that Fossil generates.
As the functionality of Fossil evolves,
the schema for the metadata can and does change.
But schema changes do not invalidate the repository.  Remember that the
  *  The name of the repository database file.
  *  The version that is currently checked out.
  *  Files that have been [/help/add | added],
     [/help/rm | removed], or [/help/mv | renamed] but not
     yet committed.
  *  The mtime and size of files as they were originally checked out,
     in order to expedite checking which files have been edited.
  *  Other checkins that have been [/help/merge | merged] into the
     working checkout but not yet committed.
  *  Copies of files prior to the most recent undoable operation - needed to
     implement the [/help/undo | undo] and [/help/redo | redo] commands.
  *  The [/help/stash | stash].
  *  State information for the [/help/bisect | bisect] command.

For Fossil commands that run from within a working checkout, the







  *  The name of the repository database file.
  *  The version that is currently checked out.
  *  Files that have been [/help/add | added],
     [/help/rm | removed], or [/help/mv | renamed] but not
     yet committed.
  *  The mtime and size of files as they were originally checked out,
     in order to expedite checking which files have been edited.
  *  Other check-ins that have been [/help/merge | merged] into the
     working checkout but not yet committed.
  *  Copies of files prior to the most recent undoable operation - needed to
     implement the [/help/undo | undo] and [/help/redo | redo] commands.
  *  The [/help/stash | stash].
  *  State information for the [/help/bisect | bisect] command.
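The stash, undo, and bisect state listed above is created and consumed by
the corresponding commands; a few illustrative invocations, all run from
inside a working checkout, might look like:

    fossil stash save -m "half-finished experiment"
    fossil undo
    fossil bisect reset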

For Fossil commands that run from within a working checkout, the

Changes to www/th1.md.

TH1 Scripts
===========

TH1 is a very small scripting language used to help generate web-page
content in Fossil.

Origins
-------

TH1 began as a minimalist reimplementation of the TCL scripting language.
There was a need to test the SQLite library on Symbian phones, but at that
time all of the test cases for SQLite were written in Tcl and Tcl could not
be easily compiled on the SymbianOS.  So TH1 was developed as a cut-down
version of TCL that would facilitate running the SQLite test scripts on
SymbianOS.

The testing of SQLite on SymbianOS was eventually accomplished by other









TH1 Scripts
===========

TH1 is a very small scripting language used to help generate web-page
content in Fossil.

Origins
-------

TH1 began as a minimalist re-implementation of the TCL scripting language.
There was a need to test the SQLite library on Symbian phones, but at that
time all of the test cases for SQLite were written in Tcl and Tcl could not
be easily compiled on the SymbianOS.  So TH1 was developed as a cut-down
version of TCL that would facilitate running the SQLite test scripts on
SymbianOS.

The testing of SQLite on SymbianOS was eventually accomplished by other
So, you see, even though the example above spans five lines, it is really
just a single command.

Summary of Core TH1 Commands
----------------------------

The original TCL language after which TH1 is modeled has a very rich
repetoire of commands.  TH1, as it is designed to be minimalist and
embedded, has a greatly reduced command set.  The following bullets
summarize the commands available in TH1:

  *  break
  *  catch SCRIPT ?VARIABLE?
  *  continue
  *  error ?STRING?







So, you see, even though the example above spans five lines, it is really
just a single command.

Summary of Core TH1 Commands
----------------------------

The original TCL language after which TH1 is modeled has a very rich
repertoire of commands.  TH1, as it is designed to be minimalist and
embedded, has a greatly reduced command set.  The following bullets
summarize the commands available in TH1:

  *  break
  *  catch SCRIPT ?VARIABLE?
  *  continue
  *  error ?STRING?
  *  utime
  *  wiki

Each of the commands above is documented by a block comment above its
implementation in the th_main.c source file.

**To Do:** We would like to have a community volunteer go through and
copy the documentation for each of these command (with appropriate
format changes and spelling and grammar corrections) into subsequent
sections of this document. It is suggested that the list of extension
commands be left intact - as a quick reference.  But it would be really
nice to also have the details of each each command does.







  *  utime
  *  wiki

Each of the commands above is documented by a block comment above its
implementation in the th_main.c source file.

**To Do:** We would like to have a community volunteer go through and
copy the documentation for each of these commands (with appropriate
format changes and spelling and grammar corrections) into subsequent
sections of this document. It is suggested that the list of extension
commands be left intact - as a quick reference.  But it would be really
nice to also have the details of what each command does.

Changes to www/webpage-ex.md.

     href='../../../tree?ci=trunk&type=tree&mtime=1'>Example</a>
     All files for the latest check-in on a branch (trunk) sorted by
     last modification time.

  *  <a target='_blank' class='exbtn'
     href='../../../fileage?name=svn-import'>Example</a>
     Age of all files in the latest check-in for branch "svn-import".
     last modification time.

  *  <a target='_blank' class='exbtn'
     href='../../../brlist'>Example</a>
     Table of branches.  (Click on column headers to sort.)

  *  <a target='_blank' class='exbtn'
     href='../../../stat'>Example</a>







     href='../../../tree?ci=trunk&type=tree&mtime=1'>Example</a>
     All files for the latest check-in on a branch (trunk) sorted by
     last modification time.

  *  <a target='_blank' class='exbtn'
     href='../../../fileage?name=svn-import'>Example</a>
     Age of all files in the latest check-in for branch "svn-import".


  *  <a target='_blank' class='exbtn'
     href='../../../brlist'>Example</a>
     Table of branches.  (Click on column headers to sort.)

  *  <a target='_blank' class='exbtn'
     href='../../../stat'>Example</a>

Changes to www/webui.wiki.

  *  Status information
  *  Timelines
  *  Graphs of revision and branching history
  *  [./event.wiki | Technical notes]
  *  File and version lists and differences
  *  Download historical versions as ZIP archives
  *  Historical change data
  *  Add and remove tags on checkins
  *  Move checkins between branches
  *  Revise checkin comments
  *  Manage user credentials and access permissions
  *  And so forth... (some [./webpage-ex.md|examples])

You get all of this, and more, for free when you use Fossil. 
There are no extra programs to install or set up.
Everything you need is already pre-configured and built into the
self-contained, stand-alone Fossil executable.







  *  Status information
  *  Timelines
  *  Graphs of revision and branching history
  *  [./event.wiki | Technical notes]
  *  File and version lists and differences
  *  Download historical versions as ZIP archives
  *  Historical change data
  *  Add and remove tags on check-ins
  *  Move check-ins between branches
  *  Revise check-in comments
  *  Manage user credentials and access permissions
  *  And so forth... (some [./webpage-ex.md|examples])

You get all of this, and more, for free when you use Fossil. 
There are no extra programs to install or set up.
Everything you need is already pre-configured and built into the
self-contained, stand-alone Fossil executable.
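For a concrete sense of how little setup is involved: assuming a repository
file named project.fossil (the name is arbitrary), the web interface
described on this page can be started locally with a single command:

    fossil ui project.fossil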
menu bar.  Fossil uses simple and easy-to-remember
[/wiki_rules | wiki formatting rules] so you won't have to spend a lot
of time learning a new markup language.  And, as with tickets, all of
your edits will automatically merge with those of your co-workers when
your repository synchronizes.

You can view summary reports of <b>branches</b> in the
check-in graph by visiting the "Branche" links on the
menu bar.  From those pages you can follow hyperlinks to get additional
details.  These screens allow you to easily keep track of what is going
on with separate subteams within your project team.

The "Files" link on the menu allows you to browse though the <b>file
hierarchy</b> of the project and to view complete changes histories on
individual files, with hyperlinks to the check-ins that made those
changes, and with diffs and annotated diffs between versions.

The web interface supports [./embeddeddoc.wiki | embedded documentation].
Embedded documentation consists of documentation files (usually in wiki format)
that are checked into the project as part of the source tree.  Such files







menu bar.  Fossil uses simple and easy-to-remember
[/wiki_rules | wiki formatting rules] so you won't have to spend a lot
of time learning a new markup language.  And, as with tickets, all of
your edits will automatically merge with those of your co-workers when
your repository synchronizes.

You can view summary reports of <b>branches</b> in the
check-in graph by visiting the "Branches" link on the
menu bar.  From those pages you can follow hyperlinks to get additional
details.  These screens allow you to easily keep track of what is going
on with separate subteams within your project team.

The "Files" link on the menu allows you to browse through the <b>file
hierarchy</b> of the project and to view complete changes histories on
individual files, with hyperlinks to the check-ins that made those
changes, and with diffs and annotated diffs between versions.

The web interface supports [./embeddeddoc.wiki | embedded documentation].
Embedded documentation consists of documentation files (usually in wiki format)
that are checked into the project as part of the source tree.  Such files