Feat(*): Project init.

fshee 2021-04-30 02:28:21 +00:00
commit 054feab897
138 changed files with 263043 additions and 0 deletions

1
.gitignore vendored Normal file

@@ -0,0 +1 @@
out

661
LICENSE Normal file

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
A URL Shortener called Link.
Copyright (C) 2021 i@fsh.ee
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

1
README Normal file

@@ -0,0 +1 @@
https://fsh.ee/

8
go.mod Normal file

@@ -0,0 +1,8 @@
module git.fsh.ee/i/link
go 1.16
require (
gorm.io/driver/sqlite v1.1.4
gorm.io/gorm v1.21.9
)

12
go.sum Normal file

@@ -0,0 +1,12 @@
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/jinzhu/now v1.1.2 h1:eVKgfIdy9b6zbWBMgFpfDPoAMifwSZagU9HmEU6zgiI=
github.com/jinzhu/now v1.1.2/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/mattn/go-sqlite3 v1.14.5 h1:1IdxlwTNazvbKJQSxoJ5/9ECbEeaTTyeU7sEAZ5KKTQ=
github.com/mattn/go-sqlite3 v1.14.5/go.mod h1:WVKg1VTActs4Qso6iwGbiFih2UIHo0ENGwNd0Lj+XmI=
gorm.io/driver/sqlite v1.1.4 h1:PDzwYE+sI6De2+mxAneV9Xs11+ZyKV6oxD3wDGkaNvM=
gorm.io/driver/sqlite v1.1.4/go.mod h1:mJCeTFr7+crvS+TRnWc5Z3UvwxUN1BGBLMrf5LA9DYw=
gorm.io/gorm v1.20.7/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw=
gorm.io/gorm v1.21.9 h1:INieZtn4P2Pw6xPJ8MzT0G4WUOsHq3RhfuDF1M6GW0E=
gorm.io/gorm v1.21.9/go.mod h1:F+OptMscr0P2F2qU97WT1WimdH9GaQPoDW7AYd5i2Y0=

81
index.html Normal file

@@ -0,0 +1,81 @@
<!DOCTYPE html>
<html lang=en>
<head>
<title>A Minimal, SQLite-Backed URL Shortener</title>
<meta charset='utf-8'>
<meta http-equiv='X-UA-Compatible' content='IE=edge,chrome=1'>
<meta http-equiv='Content-Type' content='text/html; charset=utf-8'>
<meta name='viewport' content='width=device-width, initial-scale=1'>
<meta content='A minimal, SQLite backed URL shortener. Made with Go, Vim, and FreeBSD.' name='description'>
</head>
<body style='font-family: monospace; max-width: 80ch;'>
<header>
A Minimal, SQLite-Backed URL Shortener
</header>
<style>
@media (max-width: 1000px) {
pre code {
display: block;
max-width: 100%%;
overflow-x: auto;
-webkit-overflow-scrolling: touch;
padding: 0 5px 5px 0;
}
}
</style>
<pre><code>| Examples:
|
| 1. Create a short link to https://duckduckgo.com
| $ curl -d https://duckduckgo.com %s
| %s/502fb5543c36014f
|
| 2. Delete a short link
| $ TMP=$(mktemp)
| $ # temp file will store header
| $ LINK=$(curl -sS %s -d https://duckduckgo.com -D $TMP)
| $ # the link has been successfully created
| $ DEL=$(cat $TMP | grep -i delete-with | awk '{print$2}'| tr -d '\r')
| $ # deletion key is stored in 'X-Delete-With' header.
| $ curl $LINK
| &lt;a href=&quot;https://duckduckgo.com&quot;&gt;Permanent Redirect&lt;/a&gt;.
| $ # the link is working as expected
| $ curl $LINK -X DELETE -d $DEL
| $ curl $LINK
| record not found
| $ # the link has been successfully deleted.</code></pre>
<p>
Please note: this is an example deployment. If you attempt to create a short
link here you will receive a 401 Unauthorized. If you like the examples above
and want to use this URL shortener you should self-host an instance. It's easy
to do (one of the design goals). Below are instructions detailing how.
<p>
<pre><code>| How to self-host:
|
| 1. Install dependencies:
| a. The Go programming language:
| <a href='https://golang.org/doc/install'>https://golang.org/doc/install</a>
| b. Git version control.
| <a href='https://git-scm.com/book/en/v2/Getting-Started-Installing-Git'>https://git-scm.com/book/en/v2/Getting-Started-Installing-Git</a>
|
| * On FreeBSD this would be:
| $ pkg install -y go git
|
| 2. Fetch, compile, and execute the source code:
| $ env GO111MODULE=off go get git.fsh.ee/i/link
| $ env GO111MODULE=off go run git.fsh.ee/i/link -url https://your.domain.com -port 8080 -db /path/to/sqlite/file -seed secret
|
| * The server is now running on localhost at port 8080.
| * If the SQLite database at this filepath does not exist it will be created.
| * All logging will be printed to standard error and standard output.</code></pre>
<footer style='white-space: pre;'>Source code: <a href='https://fsh.ee/src'>fsh.ee/src</a>
License: AGPL v3
Copy: 2021 i@fsh.ee
Made with: Go, Vim, and FreeBSD
Unrelated blog: <a href='https://etc.fsh.ee/'>etc.fsh.ee</a>
</footer>

253
main.go Normal file

@@ -0,0 +1,253 @@
// A URL Shortener called Link.
// Copyright (C) 2021 i@fsh.ee
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Affero General Public License for more details.
//
// You should have received a copy of the GNU Affero General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.
package main
import (
"crypto/md5"
_ "embed"
"errors"
"flag"
"fmt"
"hash/maphash"
"io/ioutil"
"log"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"time"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
)
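// indexTemplate holds index.html; it is rendered with fmt.Sprintf, and its
// three %s verbs are filled with the service URL.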
//go:embed index.html
var indexTemplate string
type DB struct {
*gorm.DB
log *log.Logger
hashSeed string
}
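// NewDB opens the SQLite database at dbFilePath, creating an empty file
// first if it does not exist, and auto-migrates the Link table.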
func NewDB(l *log.Logger, dbFilePath, hashSeed string) (DB, error) {
_, err := os.Stat(dbFilePath)
if os.IsNotExist(err) {
err := ioutil.WriteFile(dbFilePath, []byte{}, 0600)
if err != nil {
return DB{}, err
}
}
db, err := gorm.Open(sqlite.Open(dbFilePath), &gorm.Config{
NowFunc: func() time.Time { return time.Now().UTC() },
Logger: logger.Default.LogMode(logger.Silent),
})
if err != nil {
return DB{}, err
}
return DB{db, l, hashSeed}, db.AutoMigrate(&Link{})
}
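// Link maps a short path (Smol) to the original URL (Big); Del is the
// secret key required to delete the link.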
type Link struct {
gorm.Model
Big string
Smol string `gorm:"unique"`
Del string `gorm:"unique"`
}
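// getHashShortLink derives the short path. A zero maphash.Hash picks a new
// random seed on first use, so each call yields an effectively random
// 64-bit value rather than a stable hash of the URL.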
func (db DB) getHashShortLink(s fmt.Stringer) string {
h := maphash.Hash{}
h.WriteString(s.String())
return strings.TrimSpace(strings.TrimLeft(fmt.Sprintf("%#x\n", h.Sum64()), "0x"))
}
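// getHashDeleteKey derives the deletion key from the configured hash seed,
// the URL, and the current Unix timestamp.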
func (db DB) getHashDeleteKey(s fmt.Stringer) string {
return strings.TrimSpace(fmt.Sprintf("%x", md5.Sum([]byte(db.hashSeed+s.String()+strconv.FormatInt(time.Now().Unix(), 10)))))
}
func (db DB) NewLink(u *url.URL) (Link, error) {
return db.NewLinkWithShortLink(u, db.getHashShortLink(u))
}
func (db DB) NewLinkWithShortLink(u *url.URL, hash string) (Link, error) {
var (
link = Link{Big: u.String(), Smol: hash, Del: db.getHashDeleteKey(u)}
err = db.Create(&link).Error
)
// TODO: If error due to unique issue should attempt to retry a few times.
if err != nil {
db.log.Println(err)
return Link{}, err
}
return link, nil
}
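// A possible way to address the TODO above (a sketch only): on a unique
// constraint failure, regenerate the short hash and retry the insert a
// bounded number of times, e.g.
//
//	for i := 0; i < 3 && err != nil; i++ {
//		link.Smol = db.getHashShortLink(u)
//		err = db.Create(&link).Error
//	}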
func (db DB) GetLink(smol string) (l Link, e error) {
res := db.Where(&Link{Smol: smol}).First(&l)
return l, res.Error
}
func (db DB) DelLink(smol, del string) error {
link, err := db.GetLink(smol)
if err != nil {
return err
}
res := db.Where(&Link{Del: del}).Delete(&link)
if res.RowsAffected < 1 {
return gorm.ErrRecordNotFound
}
return res.Error
}
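// controller implements the HTTP API: GET /{smol} redirects to the stored
// URL, POST / (or POST /{custom}) creates a short link, and DELETE /{smol}
// with the deletion key in the body removes it.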
type controller struct {
log *log.Logger
db DB
url string
}
func NewController(logger *log.Logger, db DB, url string) controller {
return controller{logger, db, strings.TrimRight(url, "/")}
}
func (c controller) Err(rw http.ResponseWriter, r *http.Request, err error) {
if errors.Is(err, gorm.ErrRecordNotFound) {
rw.WriteHeader(http.StatusNotFound)
rw.Write([]byte(err.Error()))
return
}
c.log.Println(err)
rw.WriteHeader(http.StatusInternalServerError)
rw.Write([]byte(err.Error()))
}
func (c controller) ServeHTTP(rw http.ResponseWriter, r *http.Request) {
switch r.Method {
case http.MethodGet:
switch strings.TrimRight(r.URL.Path, "/") {
case "":
rw.Header().Add("Content-Type", "text/html")
rw.Write([]byte(fmt.Sprintf(indexTemplate, c.url, c.url, c.url)))
return
case "/favicon.ico":
http.NotFound(rw, r)
return
default:
link, err := c.db.GetLink(strings.TrimLeft(r.URL.Path, "/"))
if err != nil {
c.Err(rw, r, err)
return
}
http.Redirect(rw, r, link.Big, http.StatusPermanentRedirect)
return
}
case http.MethodPost:
b, err := ioutil.ReadAll(r.Body)
if err != nil {
c.Err(rw, r, err)
return
}
u, err := url.Parse(string(b))
if err != nil {
c.Err(rw, r, err)
return
}
if u.Scheme != "http" && u.Scheme != "https" {
rw.WriteHeader(http.StatusBadRequest)
rw.Write([]byte("URL must contain scheme. E.G. missing `http://` or `https://`."))
return
}
var (
link Link
h = strings.Trim(r.URL.Path, "/")
)
if h != "" {
link, err = c.db.NewLinkWithShortLink(u, h)
} else {
link, err = c.db.NewLink(u)
}
if err != nil {
c.Err(rw, r, err)
return
}
rw.Header().Set("X-Delete-With", link.Del)
rw.WriteHeader(http.StatusFound)
rw.Write([]byte(fmt.Sprintf("%s/%s", c.url, link.Smol)))
return
case http.MethodDelete:
b, err := ioutil.ReadAll(r.Body)
if err != nil {
c.Err(rw, r, err)
return
}
if len(b) < 1 {
rw.WriteHeader(http.StatusBadRequest)
rw.Write([]byte("Must include deletion key in DELETE body."))
return
}
var (
smol = strings.TrimSpace(strings.TrimLeft(r.URL.Path, "/"))
del = strings.TrimSpace(string(b))
)
if err := c.db.DelLink(smol, del); err != nil {
c.Err(rw, r, err)
return
}
rw.WriteHeader(http.StatusNoContent)
return
}
http.NotFound(rw, r)
}
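// main parses the flags (-db, -url, and -seed are required), opens the
// database, and serves the controller on the configured port; -v sends
// application logs to stdout instead of discarding them.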
func main() {
var (
logPrefix = "link: "
startupLogger = log.New(os.Stdout, logPrefix, 0)
applicationLogger = log.New(ioutil.Discard, logPrefix, 0)
v = flag.Bool("v", false, "verbose logging")
port = flag.Uint("port", 8080, "port to listen on")
dbFilePath = flag.String("db", "", "sqlite database filepath: required")
url = flag.String("url", "", "service url: required")
hashSeed = flag.String("seed", "", "hash seed: required")
)
flag.Parse()
if *dbFilePath == "" || *url == "" || *hashSeed == "" {
flag.Usage()
return
}
if *v {
applicationLogger = log.New(os.Stdout, logPrefix, 0)
}
db, err := NewDB(applicationLogger, *dbFilePath, *hashSeed)
if err != nil {
startupLogger.Fatal(err)
return
}
http.Handle("/", NewController(applicationLogger, db, *url))
startupLogger.Println("listening on port", *port)
startupLogger.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", *port), nil))
}

28
makefile Normal file

@@ -0,0 +1,28 @@
BIN=out
CC=go1.16
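# CC is the versioned toolchain wrapper (go1.16) installed by the setup
# target via golang.org/dl/go1.16.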
all: setup vendor gen build
setup:
@env GO111MODULE=off go get golang.org/dl/go1.16
@env GO111MODULE=off $(CC) download
vendor: go.mod go.sum
@$(CC) mod tidy
@$(CC) mod vendor
build:
@$(CC) build -ldflags='-s -w' -o $(BIN)
gen:
@$(CC) generate ./...
test:
@env $(ENV) $(CC) test ./... -cover -count 1
run: gen build
@clear
@env $(ENV) ./$(BIN) -v -url https://dev.fsh.ee -port 8080 -db /tmp/link_test_db_1.sql -seed secret
dev:
@find . -type f | grep -E '(.*)\.(go|html)' | entr -cr make run

21
vendor/github.com/jinzhu/inflection/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 - Jinzhu
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

55
vendor/github.com/jinzhu/inflection/README.md generated vendored Normal file

@@ -0,0 +1,55 @@
# Inflection
Inflection pluralizes and singularizes English nouns
[![wercker status](https://app.wercker.com/status/f8c7432b097d1f4ce636879670be0930/s/master "wercker status")](https://app.wercker.com/project/byKey/f8c7432b097d1f4ce636879670be0930)
## Basic Usage
```go
inflection.Plural("person") => "people"
inflection.Plural("Person") => "People"
inflection.Plural("PERSON") => "PEOPLE"
inflection.Plural("bus") => "buses"
inflection.Plural("BUS") => "BUSES"
inflection.Plural("Bus") => "Buses"
inflection.Singular("people") => "person"
inflection.Singular("People") => "Person"
inflection.Singular("PEOPLE") => "PERSON"
inflection.Singular("buses") => "bus"
inflection.Singular("BUSES") => "BUS"
inflection.Singular("Buses") => "Bus"
inflection.Plural("FancyPerson") => "FancyPeople"
inflection.Singular("FancyPeople") => "FancyPerson"
```
## Register Rules
Standard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)
If you want to register more rules, follow:
```
inflection.AddUncountable("fish")
inflection.AddIrregular("person", "people")
inflection.AddPlural("(bu)s$", "${1}ses") # "bus" => "buses" / "BUS" => "BUSES" / "Bus" => "Buses"
inflection.AddSingular("(bus)(es)?$", "${1}") # "buses" => "bus" / "Buses" => "Bus" / "BUSES" => "BUS"
```
## Contributing
You can help to make the project better, check out [http://gorm.io/contribute.html](http://gorm.io/contribute.html) for things you can do.
## Author
**jinzhu**
* <http://github.com/jinzhu>
* <wosmvp@gmail.com>
* <http://twitter.com/zhangjinzhu>
## License
Released under the [MIT License](http://www.opensource.org/licenses/MIT).

1
vendor/github.com/jinzhu/inflection/go.mod generated vendored Normal file

@@ -0,0 +1 @@
module github.com/jinzhu/inflection

273
vendor/github.com/jinzhu/inflection/inflections.go generated vendored Normal file

@@ -0,0 +1,273 @@
/*
Package inflection pluralizes and singularizes English nouns.
inflection.Plural("person") => "people"
inflection.Plural("Person") => "People"
inflection.Plural("PERSON") => "PEOPLE"
inflection.Singular("people") => "person"
inflection.Singular("People") => "Person"
inflection.Singular("PEOPLE") => "PERSON"
inflection.Plural("FancyPerson") => "FancydPeople"
inflection.Singular("FancyPeople") => "FancydPerson"
Standard rules are from Rails's ActiveSupport (https://github.com/rails/rails/blob/master/activesupport/lib/active_support/inflections.rb)
If you want to register more rules, follow:
inflection.AddUncountable("fish")
inflection.AddIrregular("person", "people")
inflection.AddPlural("(bu)s$", "${1}ses") # "bus" => "buses" / "BUS" => "BUSES" / "Bus" => "Buses"
inflection.AddSingular("(bus)(es)?$", "${1}") # "buses" => "bus" / "Buses" => "Bus" / "BUSES" => "BUS"
*/
package inflection
import (
"regexp"
"strings"
)
type inflection struct {
regexp *regexp.Regexp
replace string
}
// Regular is a regexp find replace inflection
type Regular struct {
find string
replace string
}
// Irregular is a hard replace inflection,
// containing both singular and plural forms
type Irregular struct {
singular string
plural string
}
// RegularSlice is a slice of Regular inflections
type RegularSlice []Regular
// IrregularSlice is a slice of Irregular inflections
type IrregularSlice []Irregular
var pluralInflections = RegularSlice{
{"([a-z])$", "${1}s"},
{"s$", "s"},
{"^(ax|test)is$", "${1}es"},
{"(octop|vir)us$", "${1}i"},
{"(octop|vir)i$", "${1}i"},
{"(alias|status)$", "${1}es"},
{"(bu)s$", "${1}ses"},
{"(buffal|tomat)o$", "${1}oes"},
{"([ti])um$", "${1}a"},
{"([ti])a$", "${1}a"},
{"sis$", "ses"},
{"(?:([^f])fe|([lr])f)$", "${1}${2}ves"},
{"(hive)$", "${1}s"},
{"([^aeiouy]|qu)y$", "${1}ies"},
{"(x|ch|ss|sh)$", "${1}es"},
{"(matr|vert|ind)(?:ix|ex)$", "${1}ices"},
{"^(m|l)ouse$", "${1}ice"},
{"^(m|l)ice$", "${1}ice"},
{"^(ox)$", "${1}en"},
{"^(oxen)$", "${1}"},
{"(quiz)$", "${1}zes"},
}
var singularInflections = RegularSlice{
{"s$", ""},
{"(ss)$", "${1}"},
{"(n)ews$", "${1}ews"},
{"([ti])a$", "${1}um"},
{"((a)naly|(b)a|(d)iagno|(p)arenthe|(p)rogno|(s)ynop|(t)he)(sis|ses)$", "${1}sis"},
{"(^analy)(sis|ses)$", "${1}sis"},
{"([^f])ves$", "${1}fe"},
{"(hive)s$", "${1}"},
{"(tive)s$", "${1}"},
{"([lr])ves$", "${1}f"},
{"([^aeiouy]|qu)ies$", "${1}y"},
{"(s)eries$", "${1}eries"},
{"(m)ovies$", "${1}ovie"},
{"(c)ookies$", "${1}ookie"},
{"(x|ch|ss|sh)es$", "${1}"},
{"^(m|l)ice$", "${1}ouse"},
{"(bus)(es)?$", "${1}"},
{"(o)es$", "${1}"},
{"(shoe)s$", "${1}"},
{"(cris|test)(is|es)$", "${1}is"},
{"^(a)x[ie]s$", "${1}xis"},
{"(octop|vir)(us|i)$", "${1}us"},
{"(alias|status)(es)?$", "${1}"},
{"^(ox)en", "${1}"},
{"(vert|ind)ices$", "${1}ex"},
{"(matr)ices$", "${1}ix"},
{"(quiz)zes$", "${1}"},
{"(database)s$", "${1}"},
}
var irregularInflections = IrregularSlice{
{"person", "people"},
{"man", "men"},
{"child", "children"},
{"sex", "sexes"},
{"move", "moves"},
{"mombie", "mombies"},
}
var uncountableInflections = []string{"equipment", "information", "rice", "money", "species", "series", "fish", "sheep", "jeans", "police"}
var compiledPluralMaps []inflection
var compiledSingularMaps []inflection
func compile() {
compiledPluralMaps = []inflection{}
compiledSingularMaps = []inflection{}
for _, uncountable := range uncountableInflections {
inf := inflection{
regexp: regexp.MustCompile("^(?i)(" + uncountable + ")$"),
replace: "${1}",
}
compiledPluralMaps = append(compiledPluralMaps, inf)
compiledSingularMaps = append(compiledSingularMaps, inf)
}
for _, value := range irregularInflections {
infs := []inflection{
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.singular) + "$"), replace: strings.ToUpper(value.plural)},
inflection{regexp: regexp.MustCompile(strings.Title(value.singular) + "$"), replace: strings.Title(value.plural)},
inflection{regexp: regexp.MustCompile(value.singular + "$"), replace: value.plural},
}
compiledPluralMaps = append(compiledPluralMaps, infs...)
}
for _, value := range irregularInflections {
infs := []inflection{
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.plural) + "$"), replace: strings.ToUpper(value.singular)},
inflection{regexp: regexp.MustCompile(strings.Title(value.plural) + "$"), replace: strings.Title(value.singular)},
inflection{regexp: regexp.MustCompile(value.plural + "$"), replace: value.singular},
}
compiledSingularMaps = append(compiledSingularMaps, infs...)
}
for i := len(pluralInflections) - 1; i >= 0; i-- {
value := pluralInflections[i]
infs := []inflection{
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: strings.ToUpper(value.replace)},
inflection{regexp: regexp.MustCompile(value.find), replace: value.replace},
inflection{regexp: regexp.MustCompile("(?i)" + value.find), replace: value.replace},
}
compiledPluralMaps = append(compiledPluralMaps, infs...)
}
for i := len(singularInflections) - 1; i >= 0; i-- {
value := singularInflections[i]
infs := []inflection{
inflection{regexp: regexp.MustCompile(strings.ToUpper(value.find)), replace: strings.ToUpper(value.replace)},
inflection{regexp: regexp.MustCompile(value.find), replace: value.replace},
inflection{regexp: regexp.MustCompile("(?i)" + value.find), replace: value.replace},
}
compiledSingularMaps = append(compiledSingularMaps, infs...)
}
}
func init() {
compile()
}
// AddPlural adds a plural inflection
func AddPlural(find, replace string) {
pluralInflections = append(pluralInflections, Regular{find, replace})
compile()
}
// AddSingular adds a singular inflection
func AddSingular(find, replace string) {
singularInflections = append(singularInflections, Regular{find, replace})
compile()
}
// AddIrregular adds an irregular inflection
func AddIrregular(singular, plural string) {
irregularInflections = append(irregularInflections, Irregular{singular, plural})
compile()
}
// AddUncountable adds an uncountable inflection
func AddUncountable(values ...string) {
uncountableInflections = append(uncountableInflections, values...)
compile()
}
// GetPlural retrieves the plural inflection values
func GetPlural() RegularSlice {
plurals := make(RegularSlice, len(pluralInflections))
copy(plurals, pluralInflections)
return plurals
}
// GetSingular retrieves the singular inflection values
func GetSingular() RegularSlice {
singulars := make(RegularSlice, len(singularInflections))
copy(singulars, singularInflections)
return singulars
}
// GetIrregular retrieves the irregular inflection values
func GetIrregular() IrregularSlice {
irregular := make(IrregularSlice, len(irregularInflections))
copy(irregular, irregularInflections)
return irregular
}
// GetUncountable retrieves the uncountable inflection values
func GetUncountable() []string {
uncountables := make([]string, len(uncountableInflections))
copy(uncountables, uncountableInflections)
return uncountables
}
// SetPlural sets the plural inflections slice
func SetPlural(inflections RegularSlice) {
pluralInflections = inflections
compile()
}
// SetSingular sets the singular inflections slice
func SetSingular(inflections RegularSlice) {
singularInflections = inflections
compile()
}
// SetIrregular sets the irregular inflections slice
func SetIrregular(inflections IrregularSlice) {
irregularInflections = inflections
compile()
}
// SetUncountable sets the uncountable inflections slice
func SetUncountable(inflections []string) {
uncountableInflections = inflections
compile()
}
// Plural converts a word to its plural form
func Plural(str string) string {
for _, inflection := range compiledPluralMaps {
if inflection.regexp.MatchString(str) {
return inflection.regexp.ReplaceAllString(str, inflection.replace)
}
}
return str
}
// Singular converts a word to its singular form
func Singular(str string) string {
for _, inflection := range compiledSingularMaps {
if inflection.regexp.MatchString(str) {
return inflection.regexp.ReplaceAllString(str, inflection.replace)
}
}
return str
}

23
vendor/github.com/jinzhu/inflection/wercker.yml generated vendored Normal file

@@ -0,0 +1,23 @@
box: golang
build:
steps:
- setup-go-workspace
# Gets the dependencies
- script:
name: go get
code: |
go get
# Build the project
- script:
name: go build
code: |
go build ./...
# Test the project
- script:
name: go test
code: |
go test ./...

3
vendor/github.com/jinzhu/now/Guardfile generated vendored Normal file

@@ -0,0 +1,3 @@
guard 'gotest' do
watch(%r{\.go$})
end

21
vendor/github.com/jinzhu/now/License generated vendored Normal file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013-NOW Jinzhu <wosmvp@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

131
vendor/github.com/jinzhu/now/README.md generated vendored Normal file
View file

@ -0,0 +1,131 @@
## Now
Now is a time toolkit for golang
[![wercker status](https://app.wercker.com/status/a350da4eae6cb28a35687ba41afb565a/s/master "wercker status")](https://app.wercker.com/project/byKey/a350da4eae6cb28a35687ba41afb565a)
## Install
```
go get -u github.com/jinzhu/now
```
## Usage
Calculating time based on current time
```go
import "github.com/jinzhu/now"
time.Now() // 2013-11-18 17:51:49.123456789 Mon
now.BeginningOfMinute() // 2013-11-18 17:51:00 Mon
now.BeginningOfHour() // 2013-11-18 17:00:00 Mon
now.BeginningOfDay() // 2013-11-18 00:00:00 Mon
now.BeginningOfWeek() // 2013-11-17 00:00:00 Sun
now.BeginningOfMonth() // 2013-11-01 00:00:00 Fri
now.BeginningOfQuarter() // 2013-10-01 00:00:00 Tue
now.BeginningOfYear() // 2013-01-01 00:00:00 Tue
now.EndOfMinute() // 2013-11-18 17:51:59.999999999 Mon
now.EndOfHour() // 2013-11-18 17:59:59.999999999 Mon
now.EndOfDay() // 2013-11-18 23:59:59.999999999 Mon
now.EndOfWeek() // 2013-11-23 23:59:59.999999999 Sat
now.EndOfMonth() // 2013-11-30 23:59:59.999999999 Sat
now.EndOfQuarter() // 2013-12-31 23:59:59.999999999 Tue
now.EndOfYear() // 2013-12-31 23:59:59.999999999 Tue
now.WeekStartDay = time.Monday // Set Monday as first day, default is Sunday
now.EndOfWeek() // 2013-11-24 23:59:59.999999999 Sun
```
Calculating time based on another time
```go
t := time.Date(2013, 02, 18, 17, 51, 49, 123456789, time.Now().Location())
now.With(t).EndOfMonth() // 2013-02-28 23:59:59.999999999 Thu
```
Calculating time based on configuration
```go
location, err := time.LoadLocation("Asia/Shanghai")
myConfig := &now.Config{
WeekStartDay: time.Monday,
TimeLocation: location,
TimeFormats: []string{"2006-01-02 15:04:05"},
}
t := time.Date(2013, 11, 18, 17, 51, 49, 123456789, time.Now().Location()) // 2013-11-18 17:51:49.123456789 Mon
myConfig.With(t).BeginningOfWeek() // 2013-11-18 00:00:00 Mon
myConfig.Parse("2002-10-12 22:14:01") // 2002-10-12 22:14:01
myConfig.Parse("2002-10-12 22:14") // returns error 'can't parse string as time: 2002-10-12 22:14'
```
### Monday/Sunday
If you don't want to bother with the `WeekStartDay` setting, you can use `Monday` and `Sunday` directly:
```go
now.Monday() // 2013-11-18 00:00:00 Mon
now.Sunday() // 2013-11-24 00:00:00 Sun (Next Sunday)
now.EndOfSunday() // 2013-11-24 23:59:59.999999999 Sun (End of next Sunday)
t := time.Date(2013, 11, 24, 17, 51, 49, 123456789, time.Now().Location()) // 2013-11-24 17:51:49.123456789 Sun
now.With(t).Monday()              // 2013-11-18 00:00:00 Mon (Last Monday if today is Sunday)
now.With(t).Sunday() // 2013-11-24 00:00:00 Sun (Beginning Of Today if today is Sunday)
now.With(t).EndOfSunday() // 2013-11-24 23:59:59.999999999 Sun (End of Today if today is Sunday)
```
### Parse String to Time
```go
time.Now() // 2013-11-18 17:51:49.123456789 Mon
// Parse(string) (time.Time, error)
t, err := now.Parse("2017") // 2017-01-01 00:00:00, nil
t, err := now.Parse("2017-10") // 2017-10-01 00:00:00, nil
t, err := now.Parse("2017-10-13") // 2017-10-13 00:00:00, nil
t, err := now.Parse("1999-12-12 12") // 1999-12-12 12:00:00, nil
t, err := now.Parse("1999-12-12 12:20") // 1999-12-12 12:20:00, nil
t, err := now.Parse("1999-12-12 12:20:21") // 1999-12-12 12:20:00, nil
t, err := now.Parse("10-13") // 2013-10-13 00:00:00, nil
t, err := now.Parse("12:20") // 2013-11-18 12:20:00, nil
t, err := now.Parse("12:20:13") // 2013-11-18 12:20:13, nil
t, err := now.Parse("14") // 2013-11-18 14:00:00, nil
t, err := now.Parse("99:99") // 2013-11-18 12:20:00, Can't parse string as time: 99:99
// MustParse must parse string to time or it will panic
now.MustParse("2013-01-13") // 2013-01-13 00:00:00
now.MustParse("02-17") // 2013-02-17 00:00:00
now.MustParse("2-17") // 2013-02-17 00:00:00
now.MustParse("8") // 2013-11-18 08:00:00
now.MustParse("2002-10-12 22:14") // 2002-10-12 22:14:00
now.MustParse("99:99") // panic: Can't parse string as time: 99:99
```
Extending `now` to support more formats is quite easy; just update `now.TimeFormats` with additional time layouts, e.g.:
```go
now.TimeFormats = append(now.TimeFormats, "02 Jan 2006 15:04")
```
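For example, once the layout above has been appended, a string in that format parses like any other (the value is illustrative):
```go
t, err := now.Parse("18 Nov 2013 17:51") // parsed with the layout appended above
```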
Please send a pull request if you want a format to be supported officially.
## Contributing
You can help to make the project better, check out [http://gorm.io/contribute.html](http://gorm.io/contribute.html) for things you can do.
## Author
**jinzhu**
* <http://github.com/jinzhu>
* <wosmvp@gmail.com>
* <http://twitter.com/zhangjinzhu>
## License
Released under the [MIT License](http://www.opensource.org/licenses/MIT).

3
vendor/github.com/jinzhu/now/go.mod generated vendored Normal file
View file

@ -0,0 +1,3 @@
module github.com/jinzhu/now
go 1.12

194
vendor/github.com/jinzhu/now/main.go generated vendored Normal file
View file

@ -0,0 +1,194 @@
// Package now is a time toolkit for golang.
//
// More details in the README here: https://github.com/jinzhu/now
//
// import "github.com/jinzhu/now"
//
// now.BeginningOfMinute() // 2013-11-18 17:51:00 Mon
// now.BeginningOfDay() // 2013-11-18 00:00:00 Mon
// now.EndOfDay() // 2013-11-18 23:59:59.999999999 Mon
package now
import "time"
// WeekStartDay set week start day, default is sunday
var WeekStartDay = time.Sunday
// TimeFormats lists the default time layouts that strings will be parsed with
var TimeFormats = []string{
"2006", "2006-1", "2006-1-2", "2006-1-2 15", "2006-1-2 15:4", "2006-1-2 15:4:5", "1-2",
"15:4:5", "15:4", "15",
"15:4:5 Jan 2, 2006 MST", "2006-01-02 15:04:05.999999999 -0700 MST", "2006-01-02T15:04:05-07:00",
"2006.1.2", "2006.1.2 15:04:05", "2006.01.02", "2006.01.02 15:04:05", "2006.01.02 15:04:05.999999999",
"1/2/2006", "1/2/2006 15:4:5", "2006/01/02", "2006/01/02 15:04:05",
time.ANSIC, time.UnixDate, time.RubyDate, time.RFC822, time.RFC822Z, time.RFC850,
time.RFC1123, time.RFC1123Z, time.RFC3339, time.RFC3339Nano,
time.Kitchen, time.Stamp, time.StampMilli, time.StampMicro, time.StampNano,
}
// Config configuration for now package
type Config struct {
WeekStartDay time.Weekday
TimeLocation *time.Location
TimeFormats []string
}
// DefaultConfig default config
var DefaultConfig *Config
// With initializes a Now with the given time, based on this configuration
func (config *Config) With(t time.Time) *Now {
return &Now{Time: t, Config: config}
}
// Parse parse string to time based on configuration
func (config *Config) Parse(strs ...string) (time.Time, error) {
if config.TimeLocation == nil {
return config.With(time.Now()).Parse(strs...)
} else {
return config.With(time.Now().In(config.TimeLocation)).Parse(strs...)
}
}
// MustParse must parse string to time or will panic
func (config *Config) MustParse(strs ...string) time.Time {
if config.TimeLocation == nil {
return config.With(time.Now()).MustParse(strs...)
} else {
return config.With(time.Now().In(config.TimeLocation)).MustParse(strs...)
}
}
// Now wraps a time.Time together with its Config
type Now struct {
time.Time
*Config
}
// With initialize Now with time
func With(t time.Time) *Now {
config := DefaultConfig
if config == nil {
config = &Config{
WeekStartDay: WeekStartDay,
TimeFormats: TimeFormats,
}
}
return &Now{Time: t, Config: config}
}
// New initialize Now with time
func New(t time.Time) *Now {
return With(t)
}
// BeginningOfMinute beginning of minute
func BeginningOfMinute() time.Time {
return With(time.Now()).BeginningOfMinute()
}
// BeginningOfHour beginning of hour
func BeginningOfHour() time.Time {
return With(time.Now()).BeginningOfHour()
}
// BeginningOfDay beginning of day
func BeginningOfDay() time.Time {
return With(time.Now()).BeginningOfDay()
}
// BeginningOfWeek beginning of week
func BeginningOfWeek() time.Time {
return With(time.Now()).BeginningOfWeek()
}
// BeginningOfMonth beginning of month
func BeginningOfMonth() time.Time {
return With(time.Now()).BeginningOfMonth()
}
// BeginningOfQuarter beginning of quarter
func BeginningOfQuarter() time.Time {
return With(time.Now()).BeginningOfQuarter()
}
// BeginningOfYear beginning of year
func BeginningOfYear() time.Time {
return With(time.Now()).BeginningOfYear()
}
// EndOfMinute end of minute
func EndOfMinute() time.Time {
return With(time.Now()).EndOfMinute()
}
// EndOfHour end of hour
func EndOfHour() time.Time {
return With(time.Now()).EndOfHour()
}
// EndOfDay end of day
func EndOfDay() time.Time {
return With(time.Now()).EndOfDay()
}
// EndOfWeek end of week
func EndOfWeek() time.Time {
return With(time.Now()).EndOfWeek()
}
// EndOfMonth end of month
func EndOfMonth() time.Time {
return With(time.Now()).EndOfMonth()
}
// EndOfQuarter end of quarter
func EndOfQuarter() time.Time {
return With(time.Now()).EndOfQuarter()
}
// EndOfYear end of year
func EndOfYear() time.Time {
return With(time.Now()).EndOfYear()
}
// Monday monday
func Monday() time.Time {
return With(time.Now()).Monday()
}
// Sunday sunday
func Sunday() time.Time {
return With(time.Now()).Sunday()
}
// EndOfSunday end of sunday
func EndOfSunday() time.Time {
return With(time.Now()).EndOfSunday()
}
// Parse parse string to time
func Parse(strs ...string) (time.Time, error) {
return With(time.Now()).Parse(strs...)
}
// ParseInLocation parse string to time in location
func ParseInLocation(loc *time.Location, strs ...string) (time.Time, error) {
return With(time.Now().In(loc)).Parse(strs...)
}
// MustParse must parse string to time or will panic
func MustParse(strs ...string) time.Time {
return With(time.Now()).MustParse(strs...)
}
// MustParseInLocation must parse string to time in location or will panic
func MustParseInLocation(loc *time.Location, strs ...string) time.Time {
return With(time.Now().In(loc)).MustParse(strs...)
}
// Between check now between the begin, end time or not
func Between(time1, time2 string) bool {
return With(time.Now()).Between(time1, time2)
}

212
vendor/github.com/jinzhu/now/now.go generated vendored Normal file
View file

@ -0,0 +1,212 @@
package now
import (
"errors"
"regexp"
"time"
)
// BeginningOfMinute beginning of minute
func (now *Now) BeginningOfMinute() time.Time {
return now.Truncate(time.Minute)
}
// BeginningOfHour beginning of hour
func (now *Now) BeginningOfHour() time.Time {
y, m, d := now.Date()
return time.Date(y, m, d, now.Time.Hour(), 0, 0, 0, now.Time.Location())
}
// BeginningOfDay beginning of day
func (now *Now) BeginningOfDay() time.Time {
y, m, d := now.Date()
return time.Date(y, m, d, 0, 0, 0, 0, now.Time.Location())
}
// BeginningOfWeek beginning of week
func (now *Now) BeginningOfWeek() time.Time {
t := now.BeginningOfDay()
weekday := int(t.Weekday())
if now.WeekStartDay != time.Sunday {
weekStartDayInt := int(now.WeekStartDay)
if weekday < weekStartDayInt {
weekday = weekday + 7 - weekStartDayInt
} else {
weekday = weekday - weekStartDayInt
}
}
return t.AddDate(0, 0, -weekday)
}
// BeginningOfMonth beginning of month
func (now *Now) BeginningOfMonth() time.Time {
y, m, _ := now.Date()
return time.Date(y, m, 1, 0, 0, 0, 0, now.Location())
}
// BeginningOfQuarter beginning of quarter
func (now *Now) BeginningOfQuarter() time.Time {
month := now.BeginningOfMonth()
offset := (int(month.Month()) - 1) % 3
return month.AddDate(0, -offset, 0)
}
// BeginningOfHalf beginning of half year
func (now *Now) BeginningOfHalf() time.Time {
month := now.BeginningOfMonth()
offset := (int(month.Month()) - 1) % 6
return month.AddDate(0, -offset, 0)
}
// BeginningOfYear beginning of year
func (now *Now) BeginningOfYear() time.Time {
y, _, _ := now.Date()
return time.Date(y, time.January, 1, 0, 0, 0, 0, now.Location())
}
// EndOfMinute end of minute
func (now *Now) EndOfMinute() time.Time {
return now.BeginningOfMinute().Add(time.Minute - time.Nanosecond)
}
// EndOfHour end of hour
func (now *Now) EndOfHour() time.Time {
return now.BeginningOfHour().Add(time.Hour - time.Nanosecond)
}
// EndOfDay end of day
func (now *Now) EndOfDay() time.Time {
y, m, d := now.Date()
return time.Date(y, m, d, 23, 59, 59, int(time.Second-time.Nanosecond), now.Location())
}
// EndOfWeek end of week
func (now *Now) EndOfWeek() time.Time {
return now.BeginningOfWeek().AddDate(0, 0, 7).Add(-time.Nanosecond)
}
// EndOfMonth end of month
func (now *Now) EndOfMonth() time.Time {
return now.BeginningOfMonth().AddDate(0, 1, 0).Add(-time.Nanosecond)
}
// EndOfQuarter end of quarter
func (now *Now) EndOfQuarter() time.Time {
return now.BeginningOfQuarter().AddDate(0, 3, 0).Add(-time.Nanosecond)
}
// EndOfHalf end of half year
func (now *Now) EndOfHalf() time.Time {
return now.BeginningOfHalf().AddDate(0, 6, 0).Add(-time.Nanosecond)
}
// EndOfYear end of year
func (now *Now) EndOfYear() time.Time {
return now.BeginningOfYear().AddDate(1, 0, 0).Add(-time.Nanosecond)
}
// Monday monday
func (now *Now) Monday() time.Time {
t := now.BeginningOfDay()
weekday := int(t.Weekday())
if weekday == 0 {
weekday = 7
}
return t.AddDate(0, 0, -weekday+1)
}
// Sunday sunday
func (now *Now) Sunday() time.Time {
t := now.BeginningOfDay()
weekday := int(t.Weekday())
if weekday == 0 {
return t
}
return t.AddDate(0, 0, (7 - weekday))
}
// EndOfSunday end of sunday
func (now *Now) EndOfSunday() time.Time {
return New(now.Sunday()).EndOfDay()
}
func (now *Now) parseWithFormat(str string, location *time.Location) (t time.Time, err error) {
for _, format := range now.TimeFormats {
t, err = time.ParseInLocation(format, str, location)
if err == nil {
return
}
}
err = errors.New("Can't parse string as time: " + str)
return
}
var hasTimeRegexp = regexp.MustCompile(`(\s+|^\s*)\d{1,2}((:\d{1,2})*|((:\d{1,2}){2}\.(\d{3}|\d{6}|\d{9})))\s*$`) // match 15:04:05, 15:04:05.000, 15:04:05.000000 15, 2017-01-01 15:04, etc
var onlyTimeRegexp = regexp.MustCompile(`^\s*\d{1,2}((:\d{1,2})*|((:\d{1,2}){2}\.(\d{3}|\d{6}|\d{9})))\s*$`) // match 15:04:05, 15, 15:04:05.000, 15:04:05.000000, etc
// Parse parse string to time
func (now *Now) Parse(strs ...string) (t time.Time, err error) {
var (
setCurrentTime bool
parseTime []int
currentLocation = now.Location()
onlyTimeInStr = true
currentTime = formatTimeToList(now.Time)
)
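// Both currentTime and parseTime use the formatTimeToList layout:
// [nanosecond, second, minute, hour, day, month, year].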
for _, str := range strs {
hasTimeInStr := hasTimeRegexp.MatchString(str) // match 15:04:05, 15
onlyTimeInStr = hasTimeInStr && onlyTimeInStr && onlyTimeRegexp.MatchString(str)
if t, err = now.parseWithFormat(str, currentLocation); err == nil {
location := t.Location()
parseTime = formatTimeToList(t)
for i, v := range parseTime {
// Don't reset hour, minute, second if current time str including time
if hasTimeInStr && i <= 3 {
continue
}
// If value is zero, replace it with current time
if v == 0 {
if setCurrentTime {
parseTime[i] = currentTime[i]
}
} else {
setCurrentTime = true
}
// if current time only includes time, should change day, month to current time
if onlyTimeInStr {
if i == 4 || i == 5 {
parseTime[i] = currentTime[i]
continue
}
}
}
t = time.Date(parseTime[6], time.Month(parseTime[5]), parseTime[4], parseTime[3], parseTime[2], parseTime[1], parseTime[0], location)
currentTime = formatTimeToList(t)
}
}
return
}
// MustParse must parse string to time or it will panic
func (now *Now) MustParse(strs ...string) (t time.Time) {
t, err := now.Parse(strs...)
if err != nil {
panic(err)
}
return t
}
// Between check time between the begin, end time or not
func (now *Now) Between(begin, end string) bool {
beginTime := now.MustParse(begin)
endTime := now.MustParse(end)
return now.After(beginTime) && now.Before(endTime)
}

9
vendor/github.com/jinzhu/now/time.go generated vendored Normal file
View file

@ -0,0 +1,9 @@
package now
import "time"
func formatTimeToList(t time.Time) []int {
hour, min, sec := t.Clock()
year, month, day := t.Date()
return []int{t.Nanosecond(), sec, min, hour, day, int(month), year}
}

23
vendor/github.com/jinzhu/now/wercker.yml generated vendored Normal file
View file

@ -0,0 +1,23 @@
box: golang
build:
steps:
- setup-go-workspace
# Gets the dependencies
- script:
name: go get
code: |
go get
# Build the project
- script:
name: go build
code: |
go build ./...
# Test the project
- script:
name: go test
code: |
go test ./...

4
vendor/github.com/mattn/go-sqlite3/.codecov.yml generated vendored Normal file
View file

@ -0,0 +1,4 @@
coverage:
status:
project: off
patch: off

14
vendor/github.com/mattn/go-sqlite3/.gitignore generated vendored Normal file
View file

@ -0,0 +1,14 @@
*.db
*.exe
*.dll
*.o
# VSCode
.vscode
# Exclude from upgrade
upgrade/*.c
upgrade/*.h
# Exclude upgrade binary
upgrade/upgrade

21
vendor/github.com/mattn/go-sqlite3/LICENSE generated vendored Normal file
View file

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Yasuhiro Matsumoto
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

590
vendor/github.com/mattn/go-sqlite3/README.md generated vendored Normal file
View file

@ -0,0 +1,590 @@
go-sqlite3
==========
[![GoDoc Reference](https://godoc.org/github.com/mattn/go-sqlite3?status.svg)](http://godoc.org/github.com/mattn/go-sqlite3)
[![GitHub Actions](https://github.com/mattn/go-sqlite3/workflows/Go/badge.svg)](https://github.com/mattn/go-sqlite3/actions?query=workflow%3AGo)
[![Financial Contributors on Open Collective](https://opencollective.com/mattn-go-sqlite3/all/badge.svg?label=financial+contributors)](https://opencollective.com/mattn-go-sqlite3)
[![codecov](https://codecov.io/gh/mattn/go-sqlite3/branch/master/graph/badge.svg)](https://codecov.io/gh/mattn/go-sqlite3)
[![Go Report Card](https://goreportcard.com/badge/github.com/mattn/go-sqlite3)](https://goreportcard.com/report/github.com/mattn/go-sqlite3)
The latest stable version is v1.14 or later, not v2.
~~**NOTE:** The increase to v2 was an accident. There were no major changes or features.~~
# Description
sqlite3 driver conforming to the built-in database/sql interface
Supported Golang version: See .github/workflows/go.yaml
[This package follows the official Golang Release Policy.](https://golang.org/doc/devel/release.html#policy)
### Overview
- [go-sqlite3](#go-sqlite3)
- [Description](#description)
- [Overview](#overview)
- [Installation](#installation)
- [API Reference](#api-reference)
- [Connection String](#connection-string)
- [DSN Examples](#dsn-examples)
- [Features](#features)
- [Usage](#usage)
- [Feature / Extension List](#feature--extension-list)
- [Compilation](#compilation)
- [Android](#android)
- [ARM](#arm)
- [Cross Compile](#cross-compile)
- [Google Cloud Platform](#google-cloud-platform)
- [Linux](#linux)
- [Alpine](#alpine)
- [Fedora](#fedora)
- [Ubuntu](#ubuntu)
- [Mac OSX](#mac-osx)
- [Windows](#windows)
- [Errors](#errors)
- [User Authentication](#user-authentication)
- [Compile](#compile)
- [Usage](#usage-1)
- [Create protected database](#create-protected-database)
- [Password Encoding](#password-encoding)
- [Available Encoders](#available-encoders)
- [Restrictions](#restrictions)
- [Support](#support)
- [User Management](#user-management)
- [SQL](#sql)
- [Examples](#examples)
- [*SQLiteConn](#sqliteconn)
- [Attached database](#attached-database)
- [Extensions](#extensions)
- [Spatialite](#spatialite)
- [FAQ](#faq)
- [License](#license)
- [Author](#author)
# Installation
This package can be installed with the go get command:
go get github.com/mattn/go-sqlite3
_go-sqlite3_ is a *cgo* package.
If you want to build your app using go-sqlite3, you need gcc.
However, after you have built and installed _go-sqlite3_ with `go install github.com/mattn/go-sqlite3` (which requires gcc), you can build your app without relying on gcc in the future.
***Important: because this is a `CGO` enabled package, you are required to set the environment variable `CGO_ENABLED=1` and have a `gcc` compiler present within your path.***
# API Reference
API documentation can be found here: http://godoc.org/github.com/mattn/go-sqlite3
Examples can be found under the [examples](./_example) directory
# Connection String
When creating a new SQLite database or a connection to an existing one, additional options can be given along with the file name.
This is also known as a DSN (Data Source Name) string.
Options are appended after the filename of the SQLite database.
The database filename and options are separated by a `?` (question mark).
Options should be URL-encoded (see [url.QueryEscape](https://golang.org/pkg/net/url/#QueryEscape)).
This also applies when using an in-memory database instead of a file.
Options can be given using the following format: `KEYWORD=VALUE` and multiple options can be combined with the `&` ampersand.
This library supports the DSN options of SQLite itself and provides additional options.
Boolean values can be one of:
* `0` `no` `false` `off`
* `1` `yes` `true` `on`
| Name | Key | Value(s) | Description |
|------|-----|----------|-------------|
| UA - Create | `_auth` | - | Create User Authentication, for more information see [User Authentication](#user-authentication) |
| UA - Username | `_auth_user` | `string` | Username for User Authentication, for more information see [User Authentication](#user-authentication) |
| UA - Password | `_auth_pass` | `string` | Password for User Authentication, for more information see [User Authentication](#user-authentication) |
| UA - Crypt | `_auth_crypt` | <ul><li>SHA1</li><li>SSHA1</li><li>SHA256</li><li>SSHA256</li><li>SHA384</li><li>SSHA384</li><li>SHA512</li><li>SSHA512</li></ul> | Password encoder to use for User Authentication, for more information see [User Authentication](#user-authentication) |
| UA - Salt | `_auth_salt` | `string` | Salt to use if the configured password encoder requires a salt, for User Authentication, for more information see [User Authentication](#user-authentication) |
| Auto Vacuum | `_auto_vacuum` \| `_vacuum` | <ul><li>`0` \| `none`</li><li>`1` \| `full`</li><li>`2` \| `incremental`</li></ul> | For more information see [PRAGMA auto_vacuum](https://www.sqlite.org/pragma.html#pragma_auto_vacuum) |
| Busy Timeout | `_busy_timeout` \| `_timeout` | `int` | Specify value for sqlite3_busy_timeout. For more information see [PRAGMA busy_timeout](https://www.sqlite.org/pragma.html#pragma_busy_timeout) |
| Case Sensitive LIKE | `_case_sensitive_like` \| `_cslike` | `boolean` | For more information see [PRAGMA case_sensitive_like](https://www.sqlite.org/pragma.html#pragma_case_sensitive_like) |
| Defer Foreign Keys | `_defer_foreign_keys` \| `_defer_fk` | `boolean` | For more information see [PRAGMA defer_foreign_keys](https://www.sqlite.org/pragma.html#pragma_defer_foreign_keys) |
| Foreign Keys | `_foreign_keys` \| `_fk` | `boolean` | For more information see [PRAGMA foreign_keys](https://www.sqlite.org/pragma.html#pragma_foreign_keys) |
| Ignore CHECK Constraints | `_ignore_check_constraints` | `boolean` | For more information see [PRAGMA ignore_check_constraints](https://www.sqlite.org/pragma.html#pragma_ignore_check_constraints) |
| Immutable | `immutable` | `boolean` | For more information see [Immutable](https://www.sqlite.org/c3ref/open.html) |
| Journal Mode | `_journal_mode` \| `_journal` | <ul><li>DELETE</li><li>TRUNCATE</li><li>PERSIST</li><li>MEMORY</li><li>WAL</li><li>OFF</li></ul> | For more information see [PRAGMA journal_mode](https://www.sqlite.org/pragma.html#pragma_journal_mode) |
| Locking Mode | `_locking_mode` \| `_locking` | <ul><li>NORMAL</li><li>EXCLUSIVE</li></ul> | For more information see [PRAGMA locking_mode](https://www.sqlite.org/pragma.html#pragma_locking_mode) |
| Mode | `mode` | <ul><li>ro</li><li>rw</li><li>rwc</li><li>memory</li></ul> | Access Mode of the database. For more information see [SQLite Open](https://www.sqlite.org/c3ref/open.html) |
| Mutex Locking | `_mutex` | <ul><li>no</li><li>full</li></ul> | Specify mutex mode. |
| Query Only | `_query_only` | `boolean` | For more information see [PRAGMA query_only](https://www.sqlite.org/pragma.html#pragma_query_only) |
| Recursive Triggers | `_recursive_triggers` \| `_rt` | `boolean` | For more information see [PRAGMA recursive_triggers](https://www.sqlite.org/pragma.html#pragma_recursive_triggers) |
| Secure Delete | `_secure_delete` | `boolean` \| `FAST` | For more information see [PRAGMA secure_delete](https://www.sqlite.org/pragma.html#pragma_secure_delete) |
| Shared-Cache Mode | `cache` | <ul><li>shared</li><li>private</li></ul> | Set cache mode for more information see [sqlite.org](https://www.sqlite.org/sharedcache.html) |
| Synchronous | `_synchronous` \| `_sync` | <ul><li>0 \| OFF</li><li>1 \| NORMAL</li><li>2 \| FULL</li><li>3 \| EXTRA</li></ul> | For more information see [PRAGMA synchronous](https://www.sqlite.org/pragma.html#pragma_synchronous) |
| Time Zone Location | `_loc` | auto | Specify location of time format. |
| Transaction Lock | `_txlock` | <ul><li>immediate</li><li>deferred</li><li>exclusive</li></ul> | Specify locking behavior for transactions. |
| Writable Schema | `_writable_schema` | `Boolean` | When this pragma is on, the SQLITE_MASTER tables of the database can be changed using ordinary UPDATE, INSERT, and DELETE statements. Warning: misuse of this pragma can easily result in a corrupt database file. |
## DSN Examples
```
file:test.db?cache=shared&mode=memory
```
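A short Go sketch of opening a database with such a DSN (the specific keys used here are taken from the option table above; any other keys combine the same way):
```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// _busy_timeout, _journal_mode and _foreign_keys come from the option table above.
	db, err := sql.Open("sqlite3", "file:test.db?_busy_timeout=5000&_journal_mode=WAL&_foreign_keys=on")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```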
# Features
This package allows features available within SQLite3 to be enabled or disabled by Go build constraints, also known as build `tags`.
[Click here for more information about build tags / constraints.](https://golang.org/pkg/go/build/#hdr-Build_Constraints)
### Usage
If you wish to build this library with additional extensions / features, use the following command:
```bash
go build --tags "<FEATURE>"
```
For available features see the extension list.
When using multiple build tags, all the different tags should be space delimited.
Example:
```bash
go build --tags "icu json1 fts5 secure_delete"
```
### Feature / Extension List
| Extension | Build Tag | Description |
|-----------|-----------|-------------|
| Additional Statistics | sqlite_stat4 | This option adds additional logic to the ANALYZE command and to the query planner that can help SQLite to choose a better query plan under certain situations. The ANALYZE command is enhanced to collect histogram data from all columns of every index and store that data in the sqlite_stat4 table.<br><br>The query planner will then use the histogram data to help it make better index choices. The downside of this compile-time option is that it violates the query planner stability guarantee making it more difficult to ensure consistent performance in mass-produced applications.<br><br>SQLITE_ENABLE_STAT4 is an enhancement of SQLITE_ENABLE_STAT3. STAT3 only recorded histogram data for the left-most column of each index whereas the STAT4 enhancement records histogram data from all columns of each index.<br><br>The SQLITE_ENABLE_STAT3 compile-time option is a no-op and is ignored if the SQLITE_ENABLE_STAT4 compile-time option is used |
| Allow URI Authority | sqlite_allow_uri_authority | URI filenames normally throw an error if the authority section is not either empty or "localhost".<br><br>However, if SQLite is compiled with the SQLITE_ALLOW_URI_AUTHORITY compile-time option, then the URI is converted into a Uniform Naming Convention (UNC) filename and passed down to the underlying operating system that way |
| App Armor | sqlite_app_armor | When defined, this C-preprocessor macro activates extra code that attempts to detect misuse of the SQLite API, such as passing in NULL pointers to required parameters or using objects after they have been destroyed. <br><br>App Armor is not available under `Windows`. |
| Disable Load Extensions | sqlite_omit_load_extension | Loading of external extensions is enabled by default.<br><br>To disable extension loading add the build tag `sqlite_omit_load_extension`. |
| Foreign Keys | sqlite_foreign_keys | This macro determines whether enforcement of foreign key constraints is enabled or disabled by default for new database connections.<br><br>Each database connection can always turn enforcement of foreign key constraints on and off and run-time using the foreign_keys pragma.<br><br>Enforcement of foreign key constraints is normally off by default, but if this compile-time parameter is set to 1, enforcement of foreign key constraints will be on by default |
| Full Auto Vacuum | sqlite_vacuum_full | Set the default auto vacuum to full |
| Incremental Auto Vacuum | sqlite_vacuum_incr | Set the default auto vacuum to incremental |
| Full Text Search Engine | sqlite_fts5 | When this option is defined in the amalgamation, version 5 of the full-text search engine (fts5) is added to the build automatically |
| International Components for Unicode | sqlite_icu | This option causes the International Components for Unicode or "ICU" extension to SQLite to be added to the build |
| Introspect PRAGMAS | sqlite_introspect | This option adds some extra PRAGMA statements. <ul><li>PRAGMA function_list</li><li>PRAGMA module_list</li><li>PRAGMA pragma_list</li></ul> |
| JSON SQL Functions | sqlite_json | When this option is defined in the amalgamation, the JSON SQL functions are added to the build automatically |
| Pre Update Hook | sqlite_preupdate_hook | Registers a callback function that is invoked prior to each INSERT, UPDATE, and DELETE operation on a database table. |
| Secure Delete | sqlite_secure_delete | This compile-time option changes the default setting of the secure_delete pragma.<br><br>When this option is not used, secure_delete defaults to off. When this option is present, secure_delete defaults to on.<br><br>The secure_delete setting causes deleted content to be overwritten with zeros. There is a small performance penalty since additional I/O must occur.<br><br>On the other hand, secure_delete can prevent fragments of sensitive information from lingering in unused parts of the database file after it has been deleted. See the documentation on the secure_delete pragma for additional information |
| Secure Delete (FAST) | sqlite_secure_delete_fast | For more information see [PRAGMA secure_delete](https://www.sqlite.org/pragma.html#pragma_secure_delete) |
| Tracing / Debug | sqlite_trace | Activate trace functions |
| User Authentication | sqlite_userauth | SQLite User Authentication see [User Authentication](#user-authentication) for more information. |
# Compilation
This package requires the `CGO_ENABLED=1` environment variable (if not set by default) and the presence of the `gcc` compiler.
If you need to add additional CFLAGS or LDFLAGS to the build command and do not want to modify this package, this can be achieved by using the `CGO_CFLAGS` and `CGO_LDFLAGS` environment variables.
## Android
This package can be compiled for Android.
Compile with:
```bash
go build --tags "android"
```
For more information see [#201](https://github.com/mattn/go-sqlite3/issues/201)
# ARM
To compile for `ARM`, use the following environment:
```bash
env CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++ \
CGO_ENABLED=1 GOOS=linux GOARCH=arm GOARM=7 \
go build -v
```
Additional information:
- [#242](https://github.com/mattn/go-sqlite3/issues/242)
- [#504](https://github.com/mattn/go-sqlite3/issues/504)
# Cross Compile
This library can be cross-compiled.
In some cases you are required to set the `CC` environment variable to the cross compiler.
## Cross Compiling from MAC OSX
The simplest way to cross compile from OSX is to use [xgo](https://github.com/karalabe/xgo).
Steps:
- Install [xgo](https://github.com/karalabe/xgo) (`go get github.com/karalabe/xgo`).
- Ensure that your project is within your `GOPATH`.
- Run `xgo local/path/to/project`.
Please refer to the project's [README](https://github.com/karalabe/xgo/blob/master/README.md) for further information.
# Google Cloud Platform
Building on GCP is not possible because Google Cloud Platform does not allow `gcc` to be executed.
Please work only with compiled final binaries.
## Linux
To compile this package on Linux you must install the development tools for your linux distribution.
To compile under Linux, use the build tag `linux`.
```bash
go build --tags "linux"
```
If you wish to link directly to libsqlite3 then you can use the `libsqlite3` build tag.
```
go build --tags "libsqlite3 linux"
```
### Alpine
When building in an `alpine` container run the following command before building.
```
apk add --update gcc musl-dev
```
### Fedora
```bash
sudo yum groupinstall "Development Tools" "Development Libraries"
```
### Ubuntu
```bash
sudo apt-get install build-essential
```
## Mac OSX
OSX should have all the tools present to compile this package; if not, install Xcode, which will add all the developer tools.
Required dependency
```bash
brew install sqlite3
```
For OSX, an additional package is required if you wish to build the `icu` extension.
This additional package can be installed with `homebrew`.
```bash
brew upgrade icu4c
```
To compile for Mac OSX.
```bash
go build --tags "darwin"
```
If you wish to link directly to libsqlite3 then you can use the `libsqlite3` build tag.
```
go build --tags "libsqlite3 darwin"
```
Additional information:
- [#206](https://github.com/mattn/go-sqlite3/issues/206)
- [#404](https://github.com/mattn/go-sqlite3/issues/404)
## Windows
To compile this package on Windows OS you must have the `gcc` compiler installed.
1) Install a Windows `gcc` toolchain.
2) Add the `bin` folders to the Windows path if the installer did not do this by default.
3) Open a terminal for the TDM-GCC toolchain, which can be found in the Windows Start menu.
4) Navigate to your project folder and run the `go build ...` command for this package.
For example the TDM-GCC Toolchain can be found [here](https://sourceforge.net/projects/tdm-gcc/).
## Errors
- Compile error: `can not be used when making a shared object; recompile with -fPIC`
When receiving a compile-time error referencing recompile with `-fPIC`, you
are probably using a hardened system.
You can compile the library on a hardened system with the following command.
```bash
go build -ldflags '-extldflags=-fno-PIC'
```
More details see [#120](https://github.com/mattn/go-sqlite3/issues/120)
- Can't build go-sqlite3 on windows 64bit.
> Probably, you are using go 1.0, go1.0 has a problem when it comes to compiling/linking on windows 64bit.
> See: [#27](https://github.com/mattn/go-sqlite3/issues/27)
- `go get github.com/mattn/go-sqlite3` throws compilation error.
`gcc` throws: `internal compiler error`
Remove the downloaded repository from your disk and try re-installing with:
```bash
go install github.com/mattn/go-sqlite3
```
# User Authentication
This package supports the SQLite User Authentication module.
## Compile
To use the User authentication module the package has to be compiled with the tag `sqlite_userauth`. See [Features](#features).
## Usage
### Create protected database
To create a database protected by user authentication, provide the `_auth` argument in the connection string.
This will enable user authentication within the database. This option however requires two additional arguments:
- `_auth_user`
- `_auth_pass`
When `_auth` is present in the connection string, user authentication will be enabled and the provided user will be created
as an `admin` user. After initial creation, the parameter `_auth` has no effect anymore and can be omitted from the connection string.
Example connection string:
Create a user authentication database with user `admin` and password `admin`:
`file:test.s3db?_auth&_auth_user=admin&_auth_pass=admin`
Create a user authentication database with user `admin` and password `admin`, and use `SHA1` for the password encoding:
`file:test.s3db?_auth&_auth_user=admin&_auth_pass=admin&_auth_crypt=sha1`
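A hedged Go sketch of opening such a protected database (this assumes the package was built with the `sqlite_userauth` tag; the file name and credentials are illustrative):
```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// The first open creates the database and the admin user; later opens must
	// supply matching _auth_user/_auth_pass credentials.
	db, err := sql.Open("sqlite3", "file:test.s3db?_auth&_auth_user=admin&_auth_pass=admin")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Authentication happens when a connection is actually made, so failures surface here.
	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```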
### Password Encoding
The passwords within the user authentication module of SQLite are encoded with the SQLite function `sqlite_crypt`.
This function uses a Caesar cipher, which is quite insecure.
This library provides several additional password encoders which can be configured through the connection string.
The password cipher can be configured with the key `_auth_crypt`, and if the configured password encoder also requires a
salt, this can be configured with `_auth_salt`.
#### Available Encoders
- SHA1
- SSHA1 (Salted SHA1)
- SHA256
- SSHA256 (salted SHA256)
- SHA384
- SSHA384 (salted SHA384)
- SHA512
- SSHA512 (salted SHA512)
### Restrictions
Operations on the database regarding user management can only be performed by an administrator user.
### Support
The user authentication module supports two kinds of users:
- administrators
- regular users
### User Management
User management can be done by directly using the `*SQLiteConn` or by SQL.
#### SQL
The following SQL functions are available for user management.
| Function | Arguments | Description |
|----------|-----------|-------------|
| `authenticate` | username `string`, password `string` | Will authenticate a user; this is done by the connection and should not be used manually. |
| `auth_user_add` | username `string`, password `string`, admin `int` | This function will add a user to the database.<br>If the database is not protected by user authentication, it will enable it. Argument `admin` is an integer indicating whether the added user should be an administrator. Only administrators can add administrators. |
| `auth_user_change` | username `string`, password `string`, admin `int` | Function to modify a user. Users can change their own password, but only an administrator can change the administrator flag. |
| `auth_user_delete` | username `string` | Delete a user from the database. Can only be used by an administrator. The currently logged in administrator cannot be deleted. This is to make sure there is always an administrator remaining. |
These functions will return an integer.
- 0 (SQLITE_OK)
- 23 (SQLITE_AUTH) Failed to perform due to authentication or insufficient privileges
##### Examples
```sql
-- Authenticate user
-- Create admin user
SELECT auth_user_add('admin2', 'admin2', 1);
-- Change password for user
SELECT auth_user_change('user', 'userpassword', 0);
-- Delete user
SELECT auth_user_delete('user');
```
#### *SQLiteConn
The following functions are available for User authentication from the `*SQLiteConn`.
| Function | Description |
|----------|-------------|
| `Authenticate(username, password string) error` | Authenticate user |
| `AuthUserAdd(username, password string, admin bool) error` | Add user |
| `AuthUserChange(username, password string, admin bool) error` | Modify user |
| `AuthUserDelete(username string) error` | Delete user |
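A hedged sketch of reaching these methods through a `ConnectHook` registered on the driver (the driver name and credentials are illustrative, and a `sqlite_userauth` build is assumed):
```go
package main

import (
	"database/sql"
	"log"

	"github.com/mattn/go-sqlite3"
)

func main() {
	sql.Register("sqlite3_with_userauth", &sqlite3.SQLiteDriver{
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			// Try to add a regular (non-admin) user; a real program would
			// tolerate only the "user already exists" failure instead of
			// logging and ignoring every error as done here.
			if err := conn.AuthUserAdd("reader", "readerpass", false); err != nil {
				log.Printf("AuthUserAdd: %v", err)
			}
			return nil
		},
	})

	db, err := sql.Open("sqlite3_with_userauth", "file:test.s3db?_auth_user=admin&_auth_pass=admin")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
}
```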
### Attached database
When using attached databases, SQLite will use the authentication from the `main` database for the attached database(s).
# Extensions
If you want your own extension to be listed here, or you want to add a reference to an extension, please submit an issue for this.
## Spatialite
Spatialite is available as an extension to SQLite, and can be used in combination with this repository.
For an example see [shaxbee/go-spatialite](https://github.com/shaxbee/go-spatialite).
## extension-functions.c from SQLite3 Contrib
extension-functions.c is available as an extension to SQLite, and provides the following functions:
- Math: acos, asin, atan, atn2, atan2, acosh, asinh, atanh, difference, degrees, radians, cos, sin, tan, cot, cosh, sinh, tanh, coth, exp, log, log10, power, sign, sqrt, square, ceil, floor, pi.
- String: replicate, charindex, leftstr, rightstr, ltrim, rtrim, trim, replace, reverse, proper, padl, padr, padc, strfilter.
- Aggregate: stdev, variance, mode, median, lower_quartile, upper_quartile
For an example see [dinedal/go-sqlite3-extension-functions](https://github.com/dinedal/go-sqlite3-extension-functions).
# FAQ
- Getting insert error while query is opened.
> You can pass some arguments into the connection string, for example, a URI.
> See: [#39](https://github.com/mattn/go-sqlite3/issues/39)
- Do you want to cross compile? mingw on Linux or Mac?
> See: [#106](https://github.com/mattn/go-sqlite3/issues/106)
> See also: http://www.limitlessfx.com/cross-compile-golang-app-for-windows-from-linux.html
- Want to get time.Time with current locale
Use `_loc=auto` in SQLite3 filename schema like `file:foo.db?_loc=auto`.
- Can I use this in multiple routines concurrently?
Yes for read-only. But not for writable. See [#50](https://github.com/mattn/go-sqlite3/issues/50), [#51](https://github.com/mattn/go-sqlite3/issues/51), [#209](https://github.com/mattn/go-sqlite3/issues/209), [#274](https://github.com/mattn/go-sqlite3/issues/274).
- Why am I getting a `no such table` error?
Why is it racy if I use a `sql.Open("sqlite3", ":memory:")` database?
Each connection to `":memory:"` opens a brand new in-memory sql database, so if
the stdlib's sql engine happens to open another connection and you've only
specified `":memory:"`, that connection will see a brand new database. A
workaround is to use `"file::memory:?cache=shared"` (or `"file:foobar?mode=memory&cache=shared"`). Every
connection to this string will point to the same in-memory database.
Note that if the last database connection in the pool closes, the in-memory database is deleted. Make sure the [max idle connection limit](https://golang.org/pkg/database/sql/#DB.SetMaxIdleConns) is > 0, and the [connection lifetime](https://golang.org/pkg/database/sql/#DB.SetConnMaxLifetime) is infinite.
For more information see
* [#204](https://github.com/mattn/go-sqlite3/issues/204)
* [#511](https://github.com/mattn/go-sqlite3/issues/511)
* https://www.sqlite.org/sharedcache.html#shared_cache_and_in_memory_databases
* https://www.sqlite.org/inmemorydb.html#sharedmemdb
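A minimal sketch of that workaround:
```go
db, err := sql.Open("sqlite3", "file::memory:?cache=shared")
if err != nil {
	log.Fatal(err)
}
db.SetMaxIdleConns(1)    // keep at least one idle connection so the shared in-memory database survives
db.SetConnMaxLifetime(0) // 0 means connections are never closed due to their age
```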
- Reading from the database with a large number of goroutines fails on OSX.
OS X limits the number of simultaneously open files OS-wide to 1000 by default.
For more information see [#289](https://github.com/mattn/go-sqlite3/issues/289)
- Trying to execute a `.` (dot) command throws an error.
Error: `Error: near ".": syntax error`
Dot commands are part of the SQLite3 CLI, not of this library.
You need to implement the feature yourself or call the sqlite3 CLI.
More information see [#305](https://github.com/mattn/go-sqlite3/issues/305)
- Error: `database is locked`
When you get a `database is locked` error, please use the following options.
Add to DSN: `cache=shared`
Example:
```go
db, err := sql.Open("sqlite3", "file:locked.sqlite?cache=shared")
```
Second, please set the maximum number of open connections of the `database/sql` package to 1:
```go
db.SetMaxOpenConns(1)
```
More information see [#209](https://github.com/mattn/go-sqlite3/issues/209)
## Contributors
### Code Contributors
This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
<a href="https://github.com/mattn/go-sqlite3/graphs/contributors"><img src="https://opencollective.com/mattn-go-sqlite3/contributors.svg?width=890&button=false" /></a>
### Financial Contributors
Become a financial contributor and help us sustain our community. [[Contribute](https://opencollective.com/mattn-go-sqlite3/contribute)]
#### Individuals
<a href="https://opencollective.com/mattn-go-sqlite3"><img src="https://opencollective.com/mattn-go-sqlite3/individuals.svg?width=890"></a>
#### Organizations
Support this project with your organization. Your logo will show up here with a link to your website. [[Contribute](https://opencollective.com/mattn-go-sqlite3/contribute)]
<a href="https://opencollective.com/mattn-go-sqlite3/organization/0/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/0/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/1/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/1/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/2/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/2/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/3/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/3/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/4/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/4/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/5/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/5/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/6/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/6/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/7/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/7/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/8/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/8/avatar.svg"></a>
<a href="https://opencollective.com/mattn-go-sqlite3/organization/9/website"><img src="https://opencollective.com/mattn-go-sqlite3/organization/9/avatar.svg"></a>
# License
MIT: http://mattn.mit-license.org/2018
sqlite3-binding.c, sqlite3-binding.h, sqlite3ext.h
The -binding suffix was added to avoid build failures under gccgo.
In this repository, those files are an amalgamation of code that was copied from SQLite3. The license of that code is the same as the license of SQLite3.
# Author
Yasuhiro Matsumoto (a.k.a mattn)
G.J.R. Timmer

85
vendor/github.com/mattn/go-sqlite3/backup.go generated vendored Normal file
View file

@ -0,0 +1,85 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
*/
import "C"
import (
"runtime"
"unsafe"
)
// SQLiteBackup implement interface of Backup.
type SQLiteBackup struct {
b *C.sqlite3_backup
}
// Backup makes a backup of the src database on srcConn into the dest database on destConn.
func (destConn *SQLiteConn) Backup(dest string, srcConn *SQLiteConn, src string) (*SQLiteBackup, error) {
destptr := C.CString(dest)
defer C.free(unsafe.Pointer(destptr))
srcptr := C.CString(src)
defer C.free(unsafe.Pointer(srcptr))
if b := C.sqlite3_backup_init(destConn.db, destptr, srcConn.db, srcptr); b != nil {
bb := &SQLiteBackup{b: b}
runtime.SetFinalizer(bb, (*SQLiteBackup).Finish)
return bb, nil
}
return nil, destConn.lastError()
}
// Step copies up to p pages in one step by calling the underlying `sqlite3_backup_step`
// function. It returns a boolean indicating whether the backup is done
// and an error signalling any other error. Done is true if the underlying
// C function returns SQLITE_DONE (Code 101).
func (b *SQLiteBackup) Step(p int) (bool, error) {
ret := C.sqlite3_backup_step(b.b, C.int(p))
if ret == C.SQLITE_DONE {
return true, nil
} else if ret != 0 && ret != C.SQLITE_LOCKED && ret != C.SQLITE_BUSY {
return false, Error{Code: ErrNo(ret)}
}
return false, nil
}
// Remaining returns the number of pages still to be backed up.
func (b *SQLiteBackup) Remaining() int {
return int(C.sqlite3_backup_remaining(b.b))
}
// PageCount returns the total number of pages in the source database.
func (b *SQLiteBackup) PageCount() int {
return int(C.sqlite3_backup_pagecount(b.b))
}
// Finish close backup.
func (b *SQLiteBackup) Finish() error {
return b.Close()
}
// Close close backup.
func (b *SQLiteBackup) Close() error {
ret := C.sqlite3_backup_finish(b.b)
// sqlite3_backup_finish() never fails, it just returns the
// error code from previous operations, so clean up before
// checking and returning an error
b.b = nil
runtime.SetFinalizer(b, nil)
if ret != 0 {
return Error{Code: ErrNo(ret)}
}
return nil
}

392
vendor/github.com/mattn/go-sqlite3/callback.go generated vendored Normal file
View file

@ -0,0 +1,392 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
// You can't export a Go function to C and have definitions in the C
// preamble in the same file, so we have to have callbackTrampoline in
// its own file. Because we need a separate file anyway, the support
// code for SQLite custom functions is in here.
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
void _sqlite3_result_text(sqlite3_context* ctx, const char* s);
void _sqlite3_result_blob(sqlite3_context* ctx, const void* b, int l);
*/
import "C"
import (
"errors"
"fmt"
"math"
"reflect"
"sync"
"unsafe"
)
//export callbackTrampoline
func callbackTrampoline(ctx *C.sqlite3_context, argc int, argv **C.sqlite3_value) {
args := (*[(math.MaxInt32 - 1) / unsafe.Sizeof((*C.sqlite3_value)(nil))]*C.sqlite3_value)(unsafe.Pointer(argv))[:argc:argc]
fi := lookupHandle(C.sqlite3_user_data(ctx)).(*functionInfo)
fi.Call(ctx, args)
}
//export stepTrampoline
func stepTrampoline(ctx *C.sqlite3_context, argc C.int, argv **C.sqlite3_value) {
args := (*[(math.MaxInt32 - 1) / unsafe.Sizeof((*C.sqlite3_value)(nil))]*C.sqlite3_value)(unsafe.Pointer(argv))[:int(argc):int(argc)]
ai := lookupHandle(C.sqlite3_user_data(ctx)).(*aggInfo)
ai.Step(ctx, args)
}
//export doneTrampoline
func doneTrampoline(ctx *C.sqlite3_context) {
ai := lookupHandle(C.sqlite3_user_data(ctx)).(*aggInfo)
ai.Done(ctx)
}
//export compareTrampoline
func compareTrampoline(handlePtr unsafe.Pointer, la C.int, a *C.char, lb C.int, b *C.char) C.int {
cmp := lookupHandle(handlePtr).(func(string, string) int)
return C.int(cmp(C.GoStringN(a, la), C.GoStringN(b, lb)))
}
//export commitHookTrampoline
func commitHookTrampoline(handle unsafe.Pointer) int {
callback := lookupHandle(handle).(func() int)
return callback()
}
//export rollbackHookTrampoline
func rollbackHookTrampoline(handle unsafe.Pointer) {
callback := lookupHandle(handle).(func())
callback()
}
//export updateHookTrampoline
func updateHookTrampoline(handle unsafe.Pointer, op int, db *C.char, table *C.char, rowid int64) {
callback := lookupHandle(handle).(func(int, string, string, int64))
callback(op, C.GoString(db), C.GoString(table), rowid)
}
//export authorizerTrampoline
func authorizerTrampoline(handle unsafe.Pointer, op int, arg1 *C.char, arg2 *C.char, arg3 *C.char) int {
callback := lookupHandle(handle).(func(int, string, string, string) int)
return callback(op, C.GoString(arg1), C.GoString(arg2), C.GoString(arg3))
}
//export preUpdateHookTrampoline
func preUpdateHookTrampoline(handle unsafe.Pointer, dbHandle uintptr, op int, db *C.char, table *C.char, oldrowid int64, newrowid int64) {
hval := lookupHandleVal(handle)
data := SQLitePreUpdateData{
Conn: hval.db,
Op: op,
DatabaseName: C.GoString(db),
TableName: C.GoString(table),
OldRowID: oldrowid,
NewRowID: newrowid,
}
callback := hval.val.(func(SQLitePreUpdateData))
callback(data)
}
// Use handles to avoid passing Go pointers to C.
type handleVal struct {
db *SQLiteConn
val interface{}
}
var handleLock sync.Mutex
var handleVals = make(map[unsafe.Pointer]handleVal)
func newHandle(db *SQLiteConn, v interface{}) unsafe.Pointer {
handleLock.Lock()
defer handleLock.Unlock()
val := handleVal{db: db, val: v}
var p unsafe.Pointer = C.malloc(C.size_t(1))
if p == nil {
panic("can't allocate 'cgo-pointer hack index pointer': ptr == nil")
}
handleVals[p] = val
return p
}
func lookupHandleVal(handle unsafe.Pointer) handleVal {
handleLock.Lock()
defer handleLock.Unlock()
return handleVals[handle]
}
func lookupHandle(handle unsafe.Pointer) interface{} {
return lookupHandleVal(handle).val
}
func deleteHandles(db *SQLiteConn) {
handleLock.Lock()
defer handleLock.Unlock()
for handle, val := range handleVals {
if val.db == db {
delete(handleVals, handle)
C.free(handle)
}
}
}
// This is only here so that tests can refer to it.
type callbackArgRaw C.sqlite3_value
type callbackArgConverter func(*C.sqlite3_value) (reflect.Value, error)
type callbackArgCast struct {
f callbackArgConverter
typ reflect.Type
}
func (c callbackArgCast) Run(v *C.sqlite3_value) (reflect.Value, error) {
val, err := c.f(v)
if err != nil {
return reflect.Value{}, err
}
if !val.Type().ConvertibleTo(c.typ) {
return reflect.Value{}, fmt.Errorf("cannot convert %s to %s", val.Type(), c.typ)
}
return val.Convert(c.typ), nil
}
func callbackArgInt64(v *C.sqlite3_value) (reflect.Value, error) {
if C.sqlite3_value_type(v) != C.SQLITE_INTEGER {
return reflect.Value{}, fmt.Errorf("argument must be an INTEGER")
}
return reflect.ValueOf(int64(C.sqlite3_value_int64(v))), nil
}
func callbackArgBool(v *C.sqlite3_value) (reflect.Value, error) {
if C.sqlite3_value_type(v) != C.SQLITE_INTEGER {
return reflect.Value{}, fmt.Errorf("argument must be an INTEGER")
}
i := int64(C.sqlite3_value_int64(v))
val := false
if i != 0 {
val = true
}
return reflect.ValueOf(val), nil
}
func callbackArgFloat64(v *C.sqlite3_value) (reflect.Value, error) {
if C.sqlite3_value_type(v) != C.SQLITE_FLOAT {
return reflect.Value{}, fmt.Errorf("argument must be a FLOAT")
}
return reflect.ValueOf(float64(C.sqlite3_value_double(v))), nil
}
func callbackArgBytes(v *C.sqlite3_value) (reflect.Value, error) {
switch C.sqlite3_value_type(v) {
case C.SQLITE_BLOB:
l := C.sqlite3_value_bytes(v)
p := C.sqlite3_value_blob(v)
return reflect.ValueOf(C.GoBytes(p, l)), nil
case C.SQLITE_TEXT:
l := C.sqlite3_value_bytes(v)
c := unsafe.Pointer(C.sqlite3_value_text(v))
return reflect.ValueOf(C.GoBytes(c, l)), nil
default:
return reflect.Value{}, fmt.Errorf("argument must be BLOB or TEXT")
}
}
func callbackArgString(v *C.sqlite3_value) (reflect.Value, error) {
switch C.sqlite3_value_type(v) {
case C.SQLITE_BLOB:
l := C.sqlite3_value_bytes(v)
p := (*C.char)(C.sqlite3_value_blob(v))
return reflect.ValueOf(C.GoStringN(p, l)), nil
case C.SQLITE_TEXT:
c := (*C.char)(unsafe.Pointer(C.sqlite3_value_text(v)))
return reflect.ValueOf(C.GoString(c)), nil
default:
return reflect.Value{}, fmt.Errorf("argument must be BLOB or TEXT")
}
}
func callbackArgGeneric(v *C.sqlite3_value) (reflect.Value, error) {
switch C.sqlite3_value_type(v) {
case C.SQLITE_INTEGER:
return callbackArgInt64(v)
case C.SQLITE_FLOAT:
return callbackArgFloat64(v)
case C.SQLITE_TEXT:
return callbackArgString(v)
case C.SQLITE_BLOB:
return callbackArgBytes(v)
case C.SQLITE_NULL:
// Interpret NULL as a nil byte slice.
var ret []byte
return reflect.ValueOf(ret), nil
default:
panic("unreachable")
}
}
func callbackArg(typ reflect.Type) (callbackArgConverter, error) {
switch typ.Kind() {
case reflect.Interface:
if typ.NumMethod() != 0 {
return nil, errors.New("the only supported interface type is interface{}")
}
return callbackArgGeneric, nil
case reflect.Slice:
if typ.Elem().Kind() != reflect.Uint8 {
return nil, errors.New("the only supported slice type is []byte")
}
return callbackArgBytes, nil
case reflect.String:
return callbackArgString, nil
case reflect.Bool:
return callbackArgBool, nil
case reflect.Int64:
return callbackArgInt64, nil
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Int, reflect.Uint:
c := callbackArgCast{callbackArgInt64, typ}
return c.Run, nil
case reflect.Float64:
return callbackArgFloat64, nil
case reflect.Float32:
c := callbackArgCast{callbackArgFloat64, typ}
return c.Run, nil
default:
return nil, fmt.Errorf("don't know how to convert to %s", typ)
}
}
func callbackConvertArgs(argv []*C.sqlite3_value, converters []callbackArgConverter, variadic callbackArgConverter) ([]reflect.Value, error) {
var args []reflect.Value
if len(argv) < len(converters) {
return nil, fmt.Errorf("function requires at least %d arguments", len(converters))
}
for i, arg := range argv[:len(converters)] {
v, err := converters[i](arg)
if err != nil {
return nil, err
}
args = append(args, v)
}
if variadic != nil {
for _, arg := range argv[len(converters):] {
v, err := variadic(arg)
if err != nil {
return nil, err
}
args = append(args, v)
}
}
return args, nil
}
type callbackRetConverter func(*C.sqlite3_context, reflect.Value) error
func callbackRetInteger(ctx *C.sqlite3_context, v reflect.Value) error {
switch v.Type().Kind() {
case reflect.Int64:
case reflect.Int8, reflect.Int16, reflect.Int32, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Int, reflect.Uint:
v = v.Convert(reflect.TypeOf(int64(0)))
case reflect.Bool:
b := v.Interface().(bool)
if b {
v = reflect.ValueOf(int64(1))
} else {
v = reflect.ValueOf(int64(0))
}
default:
return fmt.Errorf("cannot convert %s to INTEGER", v.Type())
}
C.sqlite3_result_int64(ctx, C.sqlite3_int64(v.Interface().(int64)))
return nil
}
func callbackRetFloat(ctx *C.sqlite3_context, v reflect.Value) error {
switch v.Type().Kind() {
case reflect.Float64:
case reflect.Float32:
v = v.Convert(reflect.TypeOf(float64(0)))
default:
return fmt.Errorf("cannot convert %s to FLOAT", v.Type())
}
C.sqlite3_result_double(ctx, C.double(v.Interface().(float64)))
return nil
}
func callbackRetBlob(ctx *C.sqlite3_context, v reflect.Value) error {
if v.Type().Kind() != reflect.Slice || v.Type().Elem().Kind() != reflect.Uint8 {
return fmt.Errorf("cannot convert %s to BLOB", v.Type())
}
i := v.Interface()
if i == nil || len(i.([]byte)) == 0 {
C.sqlite3_result_null(ctx)
} else {
bs := i.([]byte)
C._sqlite3_result_blob(ctx, unsafe.Pointer(&bs[0]), C.int(len(bs)))
}
return nil
}
func callbackRetText(ctx *C.sqlite3_context, v reflect.Value) error {
if v.Type().Kind() != reflect.String {
return fmt.Errorf("cannot convert %s to TEXT", v.Type())
}
C._sqlite3_result_text(ctx, C.CString(v.Interface().(string)))
return nil
}
func callbackRetNil(ctx *C.sqlite3_context, v reflect.Value) error {
return nil
}
func callbackRet(typ reflect.Type) (callbackRetConverter, error) {
switch typ.Kind() {
case reflect.Interface:
errorInterface := reflect.TypeOf((*error)(nil)).Elem()
if typ.Implements(errorInterface) {
return callbackRetNil, nil
}
fallthrough
case reflect.Slice:
if typ.Elem().Kind() != reflect.Uint8 {
return nil, errors.New("the only supported slice type is []byte")
}
return callbackRetBlob, nil
case reflect.String:
return callbackRetText, nil
case reflect.Bool, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Int, reflect.Uint:
return callbackRetInteger, nil
case reflect.Float32, reflect.Float64:
return callbackRetFloat, nil
default:
return nil, fmt.Errorf("don't know how to convert to %s", typ)
}
}
func callbackError(ctx *C.sqlite3_context, err error) {
cstr := C.CString(err.Error())
defer C.free(unsafe.Pointer(cstr))
C.sqlite3_result_error(ctx, cstr, C.int(-1))
}
// Test support code. Tests are not allowed to import "C", so we can't
// declare any functions that use C.sqlite3_value.
func callbackSyntheticForTests(v reflect.Value, err error) callbackArgConverter {
return func(*C.sqlite3_value) (reflect.Value, error) {
return v, err
}
}

299
vendor/github.com/mattn/go-sqlite3/convert.go generated vendored Normal file

@ -0,0 +1,299 @@
// Extracted from Go database/sql source code
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Type conversions for Scan.
package sqlite3
import (
"database/sql"
"database/sql/driver"
"errors"
"fmt"
"reflect"
"strconv"
"time"
)
var errNilPtr = errors.New("destination pointer is nil") // embedded in descriptive error
// convertAssign copies to dest the value in src, converting it if possible.
// An error is returned if the copy would result in loss of information.
// dest should be a pointer type.
func convertAssign(dest, src interface{}) error {
// Common cases, without reflect.
switch s := src.(type) {
case string:
switch d := dest.(type) {
case *string:
if d == nil {
return errNilPtr
}
*d = s
return nil
case *[]byte:
if d == nil {
return errNilPtr
}
*d = []byte(s)
return nil
case *sql.RawBytes:
if d == nil {
return errNilPtr
}
*d = append((*d)[:0], s...)
return nil
}
case []byte:
switch d := dest.(type) {
case *string:
if d == nil {
return errNilPtr
}
*d = string(s)
return nil
case *interface{}:
if d == nil {
return errNilPtr
}
*d = cloneBytes(s)
return nil
case *[]byte:
if d == nil {
return errNilPtr
}
*d = cloneBytes(s)
return nil
case *sql.RawBytes:
if d == nil {
return errNilPtr
}
*d = s
return nil
}
case time.Time:
switch d := dest.(type) {
case *time.Time:
*d = s
return nil
case *string:
*d = s.Format(time.RFC3339Nano)
return nil
case *[]byte:
if d == nil {
return errNilPtr
}
*d = []byte(s.Format(time.RFC3339Nano))
return nil
case *sql.RawBytes:
if d == nil {
return errNilPtr
}
*d = s.AppendFormat((*d)[:0], time.RFC3339Nano)
return nil
}
case nil:
switch d := dest.(type) {
case *interface{}:
if d == nil {
return errNilPtr
}
*d = nil
return nil
case *[]byte:
if d == nil {
return errNilPtr
}
*d = nil
return nil
case *sql.RawBytes:
if d == nil {
return errNilPtr
}
*d = nil
return nil
}
}
var sv reflect.Value
switch d := dest.(type) {
case *string:
sv = reflect.ValueOf(src)
switch sv.Kind() {
case reflect.Bool,
reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64,
reflect.Float32, reflect.Float64:
*d = asString(src)
return nil
}
case *[]byte:
sv = reflect.ValueOf(src)
if b, ok := asBytes(nil, sv); ok {
*d = b
return nil
}
case *sql.RawBytes:
sv = reflect.ValueOf(src)
if b, ok := asBytes([]byte(*d)[:0], sv); ok {
*d = sql.RawBytes(b)
return nil
}
case *bool:
bv, err := driver.Bool.ConvertValue(src)
if err == nil {
*d = bv.(bool)
}
return err
case *interface{}:
*d = src
return nil
}
if scanner, ok := dest.(sql.Scanner); ok {
return scanner.Scan(src)
}
dpv := reflect.ValueOf(dest)
if dpv.Kind() != reflect.Ptr {
return errors.New("destination not a pointer")
}
if dpv.IsNil() {
return errNilPtr
}
if !sv.IsValid() {
sv = reflect.ValueOf(src)
}
dv := reflect.Indirect(dpv)
if sv.IsValid() && sv.Type().AssignableTo(dv.Type()) {
switch b := src.(type) {
case []byte:
dv.Set(reflect.ValueOf(cloneBytes(b)))
default:
dv.Set(sv)
}
return nil
}
if dv.Kind() == sv.Kind() && sv.Type().ConvertibleTo(dv.Type()) {
dv.Set(sv.Convert(dv.Type()))
return nil
}
// The following conversions use a string value as an intermediate representation
// to convert between various numeric types.
//
// This also allows scanning into user defined types such as "type Int int64".
// For symmetry, also check for string destination types.
switch dv.Kind() {
case reflect.Ptr:
if src == nil {
dv.Set(reflect.Zero(dv.Type()))
return nil
}
dv.Set(reflect.New(dv.Type().Elem()))
return convertAssign(dv.Interface(), src)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
s := asString(src)
i64, err := strconv.ParseInt(s, 10, dv.Type().Bits())
if err != nil {
err = strconvErr(err)
return fmt.Errorf("converting driver.Value type %T (%q) to a %s: %v", src, s, dv.Kind(), err)
}
dv.SetInt(i64)
return nil
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
s := asString(src)
u64, err := strconv.ParseUint(s, 10, dv.Type().Bits())
if err != nil {
err = strconvErr(err)
return fmt.Errorf("converting driver.Value type %T (%q) to a %s: %v", src, s, dv.Kind(), err)
}
dv.SetUint(u64)
return nil
case reflect.Float32, reflect.Float64:
s := asString(src)
f64, err := strconv.ParseFloat(s, dv.Type().Bits())
if err != nil {
err = strconvErr(err)
return fmt.Errorf("converting driver.Value type %T (%q) to a %s: %v", src, s, dv.Kind(), err)
}
dv.SetFloat(f64)
return nil
case reflect.String:
switch v := src.(type) {
case string:
dv.SetString(v)
return nil
case []byte:
dv.SetString(string(v))
return nil
}
}
return fmt.Errorf("unsupported Scan, storing driver.Value type %T into type %T", src, dest)
}
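// Editor's sketch (not part of the upstream file): the example below is
// hypothetical and only illustrates the string-intermediate fallback used by
// convertAssign above, which lets TEXT and numeric driver values land in
// user-defined destination types.
type exampleCelsius float64

func exampleConvertAssignUsage() error {
	var c exampleCelsius
	// "36.6" arrives as a string; convertAssign parses it with strconv and
	// stores it via reflect's SetFloat.
	if err := convertAssign(&c, "36.6"); err != nil {
		return err
	}
	var s string
	// An int64 driver value is rendered through asString for a *string destination.
	return convertAssign(&s, int64(42))
}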
func strconvErr(err error) error {
if ne, ok := err.(*strconv.NumError); ok {
return ne.Err
}
return err
}
func cloneBytes(b []byte) []byte {
if b == nil {
return nil
}
c := make([]byte, len(b))
copy(c, b)
return c
}
func asString(src interface{}) string {
switch v := src.(type) {
case string:
return v
case []byte:
return string(v)
}
rv := reflect.ValueOf(src)
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return strconv.FormatInt(rv.Int(), 10)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return strconv.FormatUint(rv.Uint(), 10)
case reflect.Float64:
return strconv.FormatFloat(rv.Float(), 'g', -1, 64)
case reflect.Float32:
return strconv.FormatFloat(rv.Float(), 'g', -1, 32)
case reflect.Bool:
return strconv.FormatBool(rv.Bool())
}
return fmt.Sprintf("%v", src)
}
func asBytes(buf []byte, rv reflect.Value) (b []byte, ok bool) {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return strconv.AppendInt(buf, rv.Int(), 10), true
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return strconv.AppendUint(buf, rv.Uint(), 10), true
case reflect.Float32:
return strconv.AppendFloat(buf, rv.Float(), 'g', -1, 32), true
case reflect.Float64:
return strconv.AppendFloat(buf, rv.Float(), 'g', -1, 64), true
case reflect.Bool:
return strconv.AppendBool(buf, rv.Bool()), true
case reflect.String:
s := rv.String()
return append(buf, s...), true
}
return
}

135
vendor/github.com/mattn/go-sqlite3/doc.go generated vendored Normal file

@ -0,0 +1,135 @@
/*
Package sqlite3 provides an interface to SQLite3 databases.
This works as a driver for database/sql.
Installation
go get github.com/mattn/go-sqlite3
Supported Types
Currently, go-sqlite3 supports the following data types.
+------------------------------+
|go | sqlite3 |
|----------|-------------------|
|nil | null |
|int | integer |
|int64 | integer |
|float64 | float |
|bool | integer |
|[]byte | blob |
|string | text |
|time.Time | timestamp/datetime|
+------------------------------+
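As a brief sketch of how these map in practice (the table and column names
below are hypothetical):
	var (
		id      int64
		name    string
		data    []byte
		active  bool
		created time.Time
	)
	err := db.QueryRow("SELECT id, name, data, active, created_at FROM example WHERE id = ?", 1).
		Scan(&id, &name, &data, &active, &created)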
SQLite3 Extension
You can write your own extension module for sqlite3. For example, below is an
extension for a Regexp matcher operation.
#include <pcre.h>
#include <string.h>
#include <stdio.h>
#include <sqlite3ext.h>
SQLITE_EXTENSION_INIT1
static void regexp_func(sqlite3_context *context, int argc, sqlite3_value **argv) {
if (argc >= 2) {
const char *target = (const char *)sqlite3_value_text(argv[1]);
const char *pattern = (const char *)sqlite3_value_text(argv[0]);
const char* errstr = NULL;
int erroff = 0;
int vec[500];
int n, rc;
pcre* re = pcre_compile(pattern, 0, &errstr, &erroff, NULL);
rc = pcre_exec(re, NULL, target, strlen(target), 0, 0, vec, 500);
if (rc <= 0) {
sqlite3_result_error(context, errstr, 0);
return;
}
sqlite3_result_int(context, 1);
}
}
#ifdef _WIN32
__declspec(dllexport)
#endif
int sqlite3_extension_init(sqlite3 *db, char **errmsg,
const sqlite3_api_routines *api) {
SQLITE_EXTENSION_INIT2(api);
return sqlite3_create_function(db, "regexp", 2, SQLITE_UTF8,
(void*)db, regexp_func, NULL, NULL);
}
It needs to be built as a shared library (.so/.dll), and you need to register
the extension module as shown below.
sql.Register("sqlite3_with_extensions",
&sqlite3.SQLiteDriver{
Extensions: []string{
"sqlite3_mod_regexp",
},
})
Then, you can use this extension.
rows, err := db.Query("select text from mytable where name regexp '^golang'")
Connection Hook
You can run your own code when a connection is established, with access to the
SQLiteConn, by setting ConnectHook.
sql.Register("sqlite3_with_hook_example",
&sqlite3.SQLiteDriver{
ConnectHook: func(conn *sqlite3.SQLiteConn) error {
sqlite3conn = append(sqlite3conn, conn)
return nil
},
})
You can also use database/sql.Conn.Raw (Go >= 1.13):
conn, err := db.Conn(context.Background())
// if err != nil { ... }
defer conn.Close()
err = conn.Raw(func (driverConn interface{}) error {
sqliteConn := driverConn.(*sqlite3.SQLiteConn)
// ... use sqliteConn
	return nil
})
// if err != nil { ... }
Go SQLite3 Extensions
If you want to register Go functions as SQLite extension functions,
you can make a custom driver by calling RegisterFunc from
ConnectHook.
regex := func(re, s string) (bool, error) {
return regexp.MatchString(re, s)
}
sql.Register("sqlite3_extended",
&sqlite3.SQLiteDriver{
ConnectHook: func(conn *sqlite3.SQLiteConn) error {
return conn.RegisterFunc("regexp", regex, true)
},
})
You can then use the custom driver by passing its name to sql.Open.
var i int
db, err := sql.Open("sqlite3_extended", "./foo.db")
if err != nil {
panic(err)
}
err = db.QueryRow(`SELECT regexp("foo.*", "seafood")`).Scan(&i)
if err != nil {
panic(err)
}
See the documentation of RegisterFunc for more details.
*/
package sqlite3

150
vendor/github.com/mattn/go-sqlite3/error.go generated vendored Normal file

@ -0,0 +1,150 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
*/
import "C"
import "syscall"
// ErrNo wraps a basic SQLite result code.
type ErrNo int
// ErrNoMask is the mask used to extract the basic result code from an extended one.
const ErrNoMask C.int = 0xff
// ErrNoExtended wraps an extended SQLite result code.
type ErrNoExtended int
// Error represents an SQLite error, carrying the basic and extended result codes.
type Error struct {
Code ErrNo /* The error code returned by SQLite */
ExtendedCode ErrNoExtended /* The extended error code returned by SQLite */
SystemErrno syscall.Errno /* The system errno returned by the OS through SQLite, if applicable */
err string /* The error string returned by sqlite3_errmsg(),
this usually contains more specific details. */
}
// result codes from http://www.sqlite.org/c3ref/c_abort.html
var (
ErrError = ErrNo(1) /* SQL error or missing database */
ErrInternal = ErrNo(2) /* Internal logic error in SQLite */
ErrPerm = ErrNo(3) /* Access permission denied */
ErrAbort = ErrNo(4) /* Callback routine requested an abort */
ErrBusy = ErrNo(5) /* The database file is locked */
ErrLocked = ErrNo(6) /* A table in the database is locked */
ErrNomem = ErrNo(7) /* A malloc() failed */
ErrReadonly = ErrNo(8) /* Attempt to write a readonly database */
ErrInterrupt = ErrNo(9) /* Operation terminated by sqlite3_interrupt() */
ErrIoErr = ErrNo(10) /* Some kind of disk I/O error occurred */
ErrCorrupt = ErrNo(11) /* The database disk image is malformed */
ErrNotFound = ErrNo(12) /* Unknown opcode in sqlite3_file_control() */
ErrFull = ErrNo(13) /* Insertion failed because database is full */
ErrCantOpen = ErrNo(14) /* Unable to open the database file */
ErrProtocol = ErrNo(15) /* Database lock protocol error */
ErrEmpty = ErrNo(16) /* Database is empty */
ErrSchema = ErrNo(17) /* The database schema changed */
ErrTooBig = ErrNo(18) /* String or BLOB exceeds size limit */
ErrConstraint = ErrNo(19) /* Abort due to constraint violation */
ErrMismatch = ErrNo(20) /* Data type mismatch */
ErrMisuse = ErrNo(21) /* Library used incorrectly */
ErrNoLFS = ErrNo(22) /* Uses OS features not supported on host */
ErrAuth = ErrNo(23) /* Authorization denied */
ErrFormat = ErrNo(24) /* Auxiliary database format error */
ErrRange = ErrNo(25) /* 2nd parameter to sqlite3_bind out of range */
ErrNotADB = ErrNo(26) /* File opened that is not a database file */
ErrNotice = ErrNo(27) /* Notifications from sqlite3_log() */
ErrWarning = ErrNo(28) /* Warnings from sqlite3_log() */
)
// Error returns the error message for the result code.
func (err ErrNo) Error() string {
return Error{Code: err}.Error()
}
// Extend returns the extended result code derived from err.
func (err ErrNo) Extend(by int) ErrNoExtended {
return ErrNoExtended(int(err) | (by << 8))
}
// Error returns the error message for the extended result code.
func (err ErrNoExtended) Error() string {
return Error{Code: ErrNo(C.int(err) & ErrNoMask), ExtendedCode: err}.Error()
}
func (err Error) Error() string {
var str string
if err.err != "" {
str = err.err
} else {
str = C.GoString(C.sqlite3_errstr(C.int(err.Code)))
}
if err.SystemErrno != 0 {
str += ": " + err.SystemErrno.Error()
}
return str
}
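// Editor's sketch (not part of the upstream file): callers outside this
// package usually type-assert the error returned by database/sql to inspect
// these codes; the statement below is hypothetical.
//
//	if _, err := db.Exec("INSERT INTO users(id) VALUES (1)"); err != nil {
//		if se, ok := err.(sqlite3.Error); ok && se.ExtendedCode == sqlite3.ErrConstraintPrimaryKey {
//			// duplicate primary key
//		}
//	}
//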
// result codes from http://www.sqlite.org/c3ref/c_abort_rollback.html
var (
ErrIoErrRead = ErrIoErr.Extend(1)
ErrIoErrShortRead = ErrIoErr.Extend(2)
ErrIoErrWrite = ErrIoErr.Extend(3)
ErrIoErrFsync = ErrIoErr.Extend(4)
ErrIoErrDirFsync = ErrIoErr.Extend(5)
ErrIoErrTruncate = ErrIoErr.Extend(6)
ErrIoErrFstat = ErrIoErr.Extend(7)
ErrIoErrUnlock = ErrIoErr.Extend(8)
ErrIoErrRDlock = ErrIoErr.Extend(9)
ErrIoErrDelete = ErrIoErr.Extend(10)
ErrIoErrBlocked = ErrIoErr.Extend(11)
ErrIoErrNoMem = ErrIoErr.Extend(12)
ErrIoErrAccess = ErrIoErr.Extend(13)
ErrIoErrCheckReservedLock = ErrIoErr.Extend(14)
ErrIoErrLock = ErrIoErr.Extend(15)
ErrIoErrClose = ErrIoErr.Extend(16)
ErrIoErrDirClose = ErrIoErr.Extend(17)
ErrIoErrSHMOpen = ErrIoErr.Extend(18)
ErrIoErrSHMSize = ErrIoErr.Extend(19)
ErrIoErrSHMLock = ErrIoErr.Extend(20)
ErrIoErrSHMMap = ErrIoErr.Extend(21)
ErrIoErrSeek = ErrIoErr.Extend(22)
ErrIoErrDeleteNoent = ErrIoErr.Extend(23)
ErrIoErrMMap = ErrIoErr.Extend(24)
ErrIoErrGetTempPath = ErrIoErr.Extend(25)
ErrIoErrConvPath = ErrIoErr.Extend(26)
ErrLockedSharedCache = ErrLocked.Extend(1)
ErrBusyRecovery = ErrBusy.Extend(1)
ErrBusySnapshot = ErrBusy.Extend(2)
ErrCantOpenNoTempDir = ErrCantOpen.Extend(1)
ErrCantOpenIsDir = ErrCantOpen.Extend(2)
ErrCantOpenFullPath = ErrCantOpen.Extend(3)
ErrCantOpenConvPath = ErrCantOpen.Extend(4)
ErrCorruptVTab = ErrCorrupt.Extend(1)
ErrReadonlyRecovery = ErrReadonly.Extend(1)
ErrReadonlyCantLock = ErrReadonly.Extend(2)
ErrReadonlyRollback = ErrReadonly.Extend(3)
ErrReadonlyDbMoved = ErrReadonly.Extend(4)
ErrAbortRollback = ErrAbort.Extend(2)
ErrConstraintCheck = ErrConstraint.Extend(1)
ErrConstraintCommitHook = ErrConstraint.Extend(2)
ErrConstraintForeignKey = ErrConstraint.Extend(3)
ErrConstraintFunction = ErrConstraint.Extend(4)
ErrConstraintNotNull = ErrConstraint.Extend(5)
ErrConstraintPrimaryKey = ErrConstraint.Extend(6)
ErrConstraintTrigger = ErrConstraint.Extend(7)
ErrConstraintUnique = ErrConstraint.Extend(8)
ErrConstraintVTab = ErrConstraint.Extend(9)
ErrConstraintRowID = ErrConstraint.Extend(10)
ErrNoticeRecoverWAL = ErrNotice.Extend(1)
ErrNoticeRecoverRollback = ErrNotice.Extend(2)
ErrWarningAutoIndex = ErrWarning.Extend(1)
)

3
vendor/github.com/mattn/go-sqlite3/go.mod generated vendored Normal file

@ -0,0 +1,3 @@
module github.com/mattn/go-sqlite3
go 1.10

0
vendor/github.com/mattn/go-sqlite3/go.sum generated vendored Normal file

230877
vendor/github.com/mattn/go-sqlite3/sqlite3-binding.c generated vendored Normal file

File diff suppressed because it is too large

12275
vendor/github.com/mattn/go-sqlite3/sqlite3-binding.h generated vendored Normal file

File diff suppressed because it is too large

2157
vendor/github.com/mattn/go-sqlite3/sqlite3.go generated vendored Normal file

File diff suppressed because it is too large

103
vendor/github.com/mattn/go-sqlite3/sqlite3_context.go generated vendored Normal file

@ -0,0 +1,103 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
// These wrappers are necessary because SQLITE_TRANSIENT
// is a pointer constant, and cgo doesn't translate them correctly.
static inline void my_result_text(sqlite3_context *ctx, char *p, int np) {
sqlite3_result_text(ctx, p, np, SQLITE_TRANSIENT);
}
static inline void my_result_blob(sqlite3_context *ctx, void *p, int np) {
sqlite3_result_blob(ctx, p, np, SQLITE_TRANSIENT);
}
*/
import "C"
import (
"math"
"reflect"
"unsafe"
)
const i64 = unsafe.Sizeof(int(0)) > 4
// SQLiteContext wraps the C sqlite3_context object.
type SQLiteContext C.sqlite3_context
// ResultBool sets the result of an SQL function.
func (c *SQLiteContext) ResultBool(b bool) {
if b {
c.ResultInt(1)
} else {
c.ResultInt(0)
}
}
// ResultBlob sets the result of an SQL function.
// See: sqlite3_result_blob, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultBlob(b []byte) {
if i64 && len(b) > math.MaxInt32 {
C.sqlite3_result_error_toobig((*C.sqlite3_context)(c))
return
}
var p *byte
if len(b) > 0 {
p = &b[0]
}
C.my_result_blob((*C.sqlite3_context)(c), unsafe.Pointer(p), C.int(len(b)))
}
// ResultDouble sets the result of an SQL function.
// See: sqlite3_result_double, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultDouble(d float64) {
C.sqlite3_result_double((*C.sqlite3_context)(c), C.double(d))
}
// ResultInt sets the result of an SQL function.
// See: sqlite3_result_int, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultInt(i int) {
if i64 && (i > math.MaxInt32 || i < math.MinInt32) {
C.sqlite3_result_int64((*C.sqlite3_context)(c), C.sqlite3_int64(i))
} else {
C.sqlite3_result_int((*C.sqlite3_context)(c), C.int(i))
}
}
// ResultInt64 sets the result of an SQL function.
// See: sqlite3_result_int64, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultInt64(i int64) {
C.sqlite3_result_int64((*C.sqlite3_context)(c), C.sqlite3_int64(i))
}
// ResultNull sets the result of an SQL function.
// See: sqlite3_result_null, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultNull() {
C.sqlite3_result_null((*C.sqlite3_context)(c))
}
// ResultText sets the result of an SQL function.
// See: sqlite3_result_text, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultText(s string) {
h := (*reflect.StringHeader)(unsafe.Pointer(&s))
cs, l := (*C.char)(unsafe.Pointer(h.Data)), C.int(h.Len)
C.my_result_text((*C.sqlite3_context)(c), cs, l)
}
// ResultZeroblob sets the result of an SQL function.
// See: sqlite3_result_zeroblob, http://sqlite.org/c3ref/result_blob.html
func (c *SQLiteContext) ResultZeroblob(n int) {
C.sqlite3_result_zeroblob((*C.sqlite3_context)(c), C.int(n))
}
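// Editor's sketch (not part of the upstream file): these Result helpers are
// how a virtual-table cursor hands a column value back to SQLite from its
// Column method; the cursor type and field below are hypothetical.
//
//	func (vc *seriesCursor) Column(ctx *SQLiteContext, col int) error {
//		switch col {
//		case 0:
//			ctx.ResultInt64(vc.value)
//		default:
//			ctx.ResultNull()
//		}
//		return nil
//	}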


@ -0,0 +1,120 @@
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
import (
"crypto/sha1"
"crypto/sha256"
"crypto/sha512"
)
// This file provides several different implementations for the
// default embedded sqlite_crypt function.
// By default this function uses a Caesar cipher
// and is used within the UserAuthentication module to encode
// the password.
//
// The provided functions can be used as an overload of the sqlite_crypt
// function through the use of RegisterFunc on the connection.
//
// Because the functions can serve a purpose to an end user
// without using the UserAuthentication module,
// the functions are compiled in by default.
//
// From SQLITE3 - user-auth.txt
// The sqlite_user.pw field is encoded by a built-in SQL function
// "sqlite_crypt(X,Y)". The two arguments are both BLOBs. The first argument
// is the plaintext password supplied to the sqlite3_user_authenticate()
// interface. The second argument is the sqlite_user.pw value and is supplied
// so that the function can extract the "salt" used by the password encoder.
// The result of sqlite_crypt(X,Y) is another blob which is the value that
// ends up being stored in sqlite_user.pw. To verify credentials X supplied
// by the sqlite3_user_authenticate() routine, SQLite runs:
//
// sqlite_user.pw == sqlite_crypt(X, sqlite_user.pw)
//
// To compute an appropriate sqlite_user.pw value from a new or modified
// password X, sqlite_crypt(X,NULL) is run. A new random salt is selected
// when the second argument is NULL.
//
// The built-in version of sqlite_crypt() uses a simple Caesar cipher
// which prevents passwords from being revealed by searching the raw database
// for ASCII text, but is otherwise trivially broken.
// security, the database should be encrypted using the SQLite Encryption
// Extension or similar technology. Or, the application can use the
// sqlite3_create_function() interface to provide an alternative
// implementation of sqlite_crypt() that computes a stronger password hash,
// perhaps using a cryptographic hash function like SHA1.
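//
// Editor's sketch (not part of the upstream file): as noted above, a stronger
// encoder can replace the built-in sqlite_crypt by registering it from a
// ConnectHook; the driver name below is hypothetical.
//
//	sql.Register("sqlite3_sha256_crypt",
//		&SQLiteDriver{
//			ConnectHook: func(conn *SQLiteConn) error {
//				return conn.RegisterFunc("sqlite_crypt", CryptEncoderSHA256, true)
//			},
//		})
//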
// CryptEncoderSHA1 encodes a password with SHA1
func CryptEncoderSHA1(pass []byte, hash interface{}) []byte {
h := sha1.Sum(pass)
return h[:]
}
// CryptEncoderSSHA1 encodes a password with SHA1 with the
// configured salt.
func CryptEncoderSSHA1(salt string) func(pass []byte, hash interface{}) []byte {
return func(pass []byte, hash interface{}) []byte {
s := []byte(salt)
p := append(pass, s...)
h := sha1.Sum(p)
return h[:]
}
}
// CryptEncoderSHA256 encodes a password with SHA256
func CryptEncoderSHA256(pass []byte, hash interface{}) []byte {
h := sha256.Sum256(pass)
return h[:]
}
// CryptEncoderSSHA256 encodes a password with SHA256
// with the configured salt
func CryptEncoderSSHA256(salt string) func(pass []byte, hash interface{}) []byte {
return func(pass []byte, hash interface{}) []byte {
s := []byte(salt)
p := append(pass, s...)
h := sha256.Sum256(p)
return h[:]
}
}
// CryptEncoderSHA384 encodes a password with SHA384
func CryptEncoderSHA384(pass []byte, hash interface{}) []byte {
h := sha512.Sum384(pass)
return h[:]
}
// CryptEncoderSSHA384 encodes a password with SHA384
// with the configured salt
func CryptEncoderSSHA384(salt string) func(pass []byte, hash interface{}) []byte {
return func(pass []byte, hash interface{}) []byte {
s := []byte(salt)
p := append(pass, s...)
h := sha512.Sum384(p)
return h[:]
}
}
// CryptEncoderSHA512 encodes a password with SHA512
func CryptEncoderSHA512(pass []byte, hash interface{}) []byte {
h := sha512.Sum512(pass)
return h[:]
}
// CryptEncoderSSHA512 encodes a password with SHA512
// with the configured salt
func CryptEncoderSSHA512(salt string) func(pass []byte, hash interface{}) []byte {
return func(pass []byte, hash interface{}) []byte {
s := []byte(salt)
p := append(pass, s...)
h := sha512.Sum512(p)
return h[:]
}
}
// EOF

70
vendor/github.com/mattn/go-sqlite3/sqlite3_go18.go generated vendored Normal file

@ -0,0 +1,70 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build cgo
// +build go1.8
package sqlite3
import (
"database/sql/driver"
"context"
)
// Ping implements driver.Pinger.
func (c *SQLiteConn) Ping(ctx context.Context) error {
if c.db == nil {
// must be ErrBadConn for sql to close the database
return driver.ErrBadConn
}
return nil
}
// QueryContext implements driver.QueryerContext.
func (c *SQLiteConn) QueryContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Rows, error) {
list := make([]namedValue, len(args))
for i, nv := range args {
list[i] = namedValue(nv)
}
return c.query(ctx, query, list)
}
// ExecContext implements driver.ExecerContext.
func (c *SQLiteConn) ExecContext(ctx context.Context, query string, args []driver.NamedValue) (driver.Result, error) {
list := make([]namedValue, len(args))
for i, nv := range args {
list[i] = namedValue(nv)
}
return c.exec(ctx, query, list)
}
// PrepareContext implements driver.ConnPrepareContext.
func (c *SQLiteConn) PrepareContext(ctx context.Context, query string) (driver.Stmt, error) {
return c.prepare(ctx, query)
}
// BeginTx implements driver.ConnBeginTx.
func (c *SQLiteConn) BeginTx(ctx context.Context, opts driver.TxOptions) (driver.Tx, error) {
return c.begin(ctx)
}
// QueryContext implements driver.StmtQueryContext.
func (s *SQLiteStmt) QueryContext(ctx context.Context, args []driver.NamedValue) (driver.Rows, error) {
list := make([]namedValue, len(args))
for i, nv := range args {
list[i] = namedValue(nv)
}
return s.query(ctx, list)
}
// ExecContext implements driver.StmtExecContext.
func (s *SQLiteStmt) ExecContext(ctx context.Context, args []driver.NamedValue) (driver.Result, error) {
list := make([]namedValue, len(args))
for i, nv := range args {
list[i] = namedValue(nv)
}
return s.exec(ctx, list)
}
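// Editor's sketch (not part of the upstream file): with the context-aware
// methods above, the usual database/sql context plumbing applies; the query
// below is hypothetical.
//
//	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
//	defer cancel()
//	rows, err := db.QueryContext(ctx, "SELECT id FROM example")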


@ -0,0 +1,19 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build libsqlite3
package sqlite3
/*
#cgo CFLAGS: -DUSE_LIBSQLITE3
#cgo linux LDFLAGS: -lsqlite3
#cgo darwin LDFLAGS: -L/usr/local/opt/sqlite/lib -lsqlite3
#cgo darwin CFLAGS: -I/usr/local/opt/sqlite/include
#cgo openbsd LDFLAGS: -lsqlite3
#cgo solaris LDFLAGS: -lsqlite3
#cgo windows LDFLAGS: -lsqlite3
*/
import "C"


@ -0,0 +1,84 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !sqlite_omit_load_extension
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
*/
import "C"
import (
"errors"
"unsafe"
)
func (c *SQLiteConn) loadExtensions(extensions []string) error {
rv := C.sqlite3_enable_load_extension(c.db, 1)
if rv != C.SQLITE_OK {
return errors.New(C.GoString(C.sqlite3_errmsg(c.db)))
}
for _, extension := range extensions {
if err := c.loadExtension(extension, nil); err != nil {
C.sqlite3_enable_load_extension(c.db, 0)
return err
}
}
rv = C.sqlite3_enable_load_extension(c.db, 0)
if rv != C.SQLITE_OK {
return errors.New(C.GoString(C.sqlite3_errmsg(c.db)))
}
return nil
}
// LoadExtension loads an SQLite extension.
func (c *SQLiteConn) LoadExtension(lib string, entry string) error {
rv := C.sqlite3_enable_load_extension(c.db, 1)
if rv != C.SQLITE_OK {
return errors.New(C.GoString(C.sqlite3_errmsg(c.db)))
}
if err := c.loadExtension(lib, &entry); err != nil {
C.sqlite3_enable_load_extension(c.db, 0)
return err
}
rv = C.sqlite3_enable_load_extension(c.db, 0)
if rv != C.SQLITE_OK {
return errors.New(C.GoString(C.sqlite3_errmsg(c.db)))
}
return nil
}
func (c *SQLiteConn) loadExtension(lib string, entry *string) error {
clib := C.CString(lib)
defer C.free(unsafe.Pointer(clib))
var centry *C.char
if entry != nil {
centry = C.CString(*entry)
defer C.free(unsafe.Pointer(centry))
}
var errMsg *C.char
defer C.sqlite3_free(unsafe.Pointer(errMsg))
rv := C.sqlite3_load_extension(c.db, clib, centry, &errMsg)
if rv != C.SQLITE_OK {
return errors.New(C.GoString(errMsg))
}
return nil
}
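// Editor's sketch (not part of the upstream file): a common pattern is to load
// extensions from a ConnectHook so that every pooled connection gets them; the
// library path and entry point below are hypothetical.
//
//	sql.Register("sqlite3_with_extension",
//		&SQLiteDriver{
//			ConnectHook: func(conn *SQLiteConn) error {
//				return conn.LoadExtension("./mod_regexp.so", "sqlite3_extension_init")
//			},
//		})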


@ -0,0 +1,24 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_omit_load_extension
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_OMIT_LOAD_EXTENSION
*/
import "C"
import (
"errors"
)
func (c *SQLiteConn) loadExtensions(extensions []string) error {
return errors.New("Extensions have been disabled for static builds")
}
func (c *SQLiteConn) LoadExtension(lib string, entry string) error {
return errors.New("Extensions have been disabled for static builds")
}


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_allow_uri_authority
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ALLOW_URI_AUTHORITY
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,16 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !windows
// +build sqlite_app_armor
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_API_ARMOR
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_foreign_keys
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_DEFAULT_FOREIGN_KEYS=1
#cgo LDFLAGS: -lm
*/
import "C"

14
vendor/github.com/mattn/go-sqlite3/sqlite3_opt_fts5.go generated vendored Normal file

@ -0,0 +1,14 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_fts5 fts5
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_FTS5
#cgo LDFLAGS: -lm
*/
import "C"

17
vendor/github.com/mattn/go-sqlite3/sqlite3_opt_icu.go generated vendored Normal file

@ -0,0 +1,17 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_icu icu
package sqlite3
/*
#cgo LDFLAGS: -licuuc -licui18n
#cgo CFLAGS: -DSQLITE_ENABLE_ICU
#cgo darwin CFLAGS: -I/usr/local/opt/icu4c/include
#cgo darwin LDFLAGS: -L/usr/local/opt/icu4c/lib
#cgo openbsd LDFLAGS: -lsqlite3
*/
import "C"


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_introspect
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_INTROSPECTION_PRAGMAS
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,13 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_json sqlite_json1 json1
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_JSON1
*/
import "C"


@ -0,0 +1,20 @@
// Copyright (C) 2019 G.J.R. Timmer <gjr.timmer@gmail.com>.
// Copyright (C) 2018 segment.com <friends@segment.com>
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build cgo
package sqlite3
// SQLitePreUpdateData represents all of the data available during a
// pre-update hook call.
type SQLitePreUpdateData struct {
Conn *SQLiteConn
Op int
DatabaseName string
TableName string
OldRowID int64
NewRowID int64
}


@ -0,0 +1,112 @@
// Copyright (C) 2019 G.J.R. Timmer <gjr.timmer@gmail.com>.
// Copyright (C) 2018 segment.com <friends@segment.com>
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_preupdate_hook
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_PREUPDATE_HOOK
#cgo LDFLAGS: -lm
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
#include <string.h>
void preUpdateHookTrampoline(void*, sqlite3 *, int, char *, char *, sqlite3_int64, sqlite3_int64);
*/
import "C"
import (
"errors"
"unsafe"
)
// RegisterPreUpdateHook sets the pre-update hook for a connection.
//
// The callback is passed a SQLitePreUpdateData struct with the data for
// the update, as well as methods for fetching copies of impacted data.
//
// If there is an existing update hook for this connection, it will be
// removed. If callback is nil the existing hook (if any) will be removed
// without creating a new one.
func (c *SQLiteConn) RegisterPreUpdateHook(callback func(SQLitePreUpdateData)) {
if callback == nil {
C.sqlite3_preupdate_hook(c.db, nil, nil)
} else {
C.sqlite3_preupdate_hook(c.db, (*[0]byte)(unsafe.Pointer(C.preUpdateHookTrampoline)), unsafe.Pointer(newHandle(c, callback)))
}
}
// Depth returns the source path of the write, see sqlite3_preupdate_depth()
func (d *SQLitePreUpdateData) Depth() int {
return int(C.sqlite3_preupdate_depth(d.Conn.db))
}
// Count returns the number of columns in the row
func (d *SQLitePreUpdateData) Count() int {
return int(C.sqlite3_preupdate_count(d.Conn.db))
}
func (d *SQLitePreUpdateData) row(dest []interface{}, new bool) error {
for i := 0; i < d.Count() && i < len(dest); i++ {
var val *C.sqlite3_value
var src interface{}
// Initially I tried making this just a function pointer argument, but
// it's absurdly complicated to pass C function pointers.
if new {
C.sqlite3_preupdate_new(d.Conn.db, C.int(i), &val)
} else {
C.sqlite3_preupdate_old(d.Conn.db, C.int(i), &val)
}
switch C.sqlite3_value_type(val) {
case C.SQLITE_INTEGER:
src = int64(C.sqlite3_value_int64(val))
case C.SQLITE_FLOAT:
src = float64(C.sqlite3_value_double(val))
case C.SQLITE_BLOB:
len := C.sqlite3_value_bytes(val)
blobptr := C.sqlite3_value_blob(val)
src = C.GoBytes(blobptr, len)
case C.SQLITE_TEXT:
len := C.sqlite3_value_bytes(val)
cstrptr := unsafe.Pointer(C.sqlite3_value_text(val))
src = C.GoBytes(cstrptr, len)
case C.SQLITE_NULL:
src = nil
}
err := convertAssign(&dest[i], src)
if err != nil {
return err
}
}
return nil
}
// Old populates dest with the row data to be replaced. This works similarly to
// database/sql's Rows.Scan()
func (d *SQLitePreUpdateData) Old(dest ...interface{}) error {
if d.Op == SQLITE_INSERT {
return errors.New("There is no old row for INSERT operations")
}
return d.row(dest, false)
}
// New populates dest with the replacement row data. This works similarly to
// database/sql's Rows.Scan()
func (d *SQLitePreUpdateData) New(dest ...interface{}) error {
if d.Op == SQLITE_DELETE {
return errors.New("There is no new row for DELETE operations")
}
return d.row(dest, true)
}
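// Editor's sketch (not part of the upstream file): a hook usually switches on
// Op and reads the affected row via Old/New; the driver name is hypothetical,
// and SQLITE_INSERT/SQLITE_UPDATE/SQLITE_DELETE are defined in the main driver
// file.
//
//	sql.Register("sqlite3_with_preupdate",
//		&SQLiteDriver{
//			ConnectHook: func(conn *SQLiteConn) error {
//				conn.RegisterPreUpdateHook(func(d SQLitePreUpdateData) {
//					if d.Op == SQLITE_UPDATE {
//						var oldName, newName string
//						_ = d.Old(&oldName)
//						_ = d.New(&newName)
//					}
//				})
//				return nil
//			},
//		})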


@ -0,0 +1,21 @@
// Copyright (C) 2019 G.J.R. Timmer <gjr.timmer@gmail.com>.
// Copyright (C) 2018 segment.com <friends@segment.com>
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !sqlite_preupdate_hook,cgo
package sqlite3
// RegisterPreUpdateHook sets the pre-update hook for a connection.
//
// The callback is passed a SQLitePreUpdateData struct with the data for
// the update, as well as methods for fetching copies of impacted data.
//
// If there is an existing update hook for this connection, it will be
// removed. If callback is nil the existing hook (if any) will be removed
// without creating a new one.
func (c *SQLiteConn) RegisterPreUpdateHook(callback func(SQLitePreUpdateData)) {
// NOOP
}


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_secure_delete
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_SECURE_DELETE=1
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_secure_delete_fast
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_SECURE_DELETE=FAST
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_stat4
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_STAT4
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,85 @@
// Copyright (C) 2018 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
#ifdef SQLITE_ENABLE_UNLOCK_NOTIFY
#include <stdio.h>
#include <sqlite3-binding.h>
extern int unlock_notify_wait(sqlite3 *db);
int
_sqlite3_step_blocking(sqlite3_stmt *stmt)
{
int rv;
sqlite3* db;
db = sqlite3_db_handle(stmt);
for (;;) {
rv = sqlite3_step(stmt);
if (rv != SQLITE_LOCKED) {
break;
}
if (sqlite3_extended_errcode(db) != SQLITE_LOCKED_SHAREDCACHE) {
break;
}
rv = unlock_notify_wait(db);
if (rv != SQLITE_OK) {
break;
}
sqlite3_reset(stmt);
}
return rv;
}
int
_sqlite3_step_row_blocking(sqlite3_stmt* stmt, long long* rowid, long long* changes)
{
int rv;
sqlite3* db;
db = sqlite3_db_handle(stmt);
for (;;) {
rv = sqlite3_step(stmt);
if (rv!=SQLITE_LOCKED) {
break;
}
if (sqlite3_extended_errcode(db) != SQLITE_LOCKED_SHAREDCACHE) {
break;
}
rv = unlock_notify_wait(db);
if (rv != SQLITE_OK) {
break;
}
sqlite3_reset(stmt);
}
*rowid = (long long) sqlite3_last_insert_rowid(db);
*changes = (long long) sqlite3_changes(db);
return rv;
}
int
_sqlite3_prepare_v2_blocking(sqlite3 *db, const char *zSql, int nBytes, sqlite3_stmt **ppStmt, const char **pzTail)
{
int rv;
for (;;) {
rv = sqlite3_prepare_v2(db, zSql, nBytes, ppStmt, pzTail);
if (rv!=SQLITE_LOCKED) {
break;
}
if (sqlite3_extended_errcode(db) != SQLITE_LOCKED_SHAREDCACHE) {
break;
}
rv = unlock_notify_wait(db);
if (rv != SQLITE_OK) {
break;
}
}
return rv;
}
#endif


@ -0,0 +1,93 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build cgo
// +build sqlite_unlock_notify
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_ENABLE_UNLOCK_NOTIFY
#include <stdlib.h>
#include <sqlite3-binding.h>
extern void unlock_notify_callback(void *arg, int argc);
*/
import "C"
import (
"fmt"
"math"
"sync"
"unsafe"
)
type unlock_notify_table struct {
sync.Mutex
seqnum uint
table map[uint]chan struct{}
}
var unt unlock_notify_table = unlock_notify_table{table: make(map[uint]chan struct{})}
func (t *unlock_notify_table) add(c chan struct{}) uint {
t.Lock()
defer t.Unlock()
h := t.seqnum
t.table[h] = c
t.seqnum++
return h
}
func (t *unlock_notify_table) remove(h uint) {
t.Lock()
defer t.Unlock()
delete(t.table, h)
}
func (t *unlock_notify_table) get(h uint) chan struct{} {
t.Lock()
defer t.Unlock()
c, ok := t.table[h]
if !ok {
panic(fmt.Sprintf("Non-existent key for unlcok-notify channel: %d", h))
}
return c
}
//export unlock_notify_callback
func unlock_notify_callback(argv unsafe.Pointer, argc C.int) {
for i := 0; i < int(argc); i++ {
parg := ((*(*[(math.MaxInt32 - 1) / unsafe.Sizeof((*C.uint)(nil))]*[1]uint)(argv))[i])
arg := *parg
h := arg[0]
c := unt.get(h)
c <- struct{}{}
}
}
//export unlock_notify_wait
func unlock_notify_wait(db *C.sqlite3) C.int {
// It has to be a buffered channel so as not to block in sqlite3_unlock_notify,
// as sqlite3_unlock_notify could invoke the callback before it returns.
c := make(chan struct{}, 1)
defer close(c)
h := unt.add(c)
defer unt.remove(h)
pargv := C.malloc(C.sizeof_uint)
defer C.free(pargv)
argv := (*[1]uint)(pargv)
argv[0] = h
if rv := C.sqlite3_unlock_notify(db, (*[0]byte)(C.unlock_notify_callback), unsafe.Pointer(pargv)); rv != C.SQLITE_OK {
return rv
}
<-c
return C.SQLITE_OK
}


@ -0,0 +1,289 @@
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_userauth
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_USER_AUTHENTICATION
#cgo LDFLAGS: -lm
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
static int
_sqlite3_user_authenticate(sqlite3* db, const char* zUsername, const char* aPW, int nPW)
{
return sqlite3_user_authenticate(db, zUsername, aPW, nPW);
}
static int
_sqlite3_user_add(sqlite3* db, const char* zUsername, const char* aPW, int nPW, int isAdmin)
{
return sqlite3_user_add(db, zUsername, aPW, nPW, isAdmin);
}
static int
_sqlite3_user_change(sqlite3* db, const char* zUsername, const char* aPW, int nPW, int isAdmin)
{
return sqlite3_user_change(db, zUsername, aPW, nPW, isAdmin);
}
static int
_sqlite3_user_delete(sqlite3* db, const char* zUsername)
{
return sqlite3_user_delete(db, zUsername);
}
static int
_sqlite3_auth_enabled(sqlite3* db)
{
int exists = -1;
sqlite3_stmt *stmt;
sqlite3_prepare_v2(db, "select count(type) from sqlite_master WHERE type='table' and name='sqlite_user';", -1, &stmt, NULL);
while ( sqlite3_step(stmt) == SQLITE_ROW) {
exists = sqlite3_column_int(stmt, 0);
}
sqlite3_finalize(stmt);
return exists;
}
*/
import "C"
import (
"errors"
"unsafe"
)
const (
SQLITE_AUTH = C.SQLITE_AUTH
)
var (
ErrUnauthorized = errors.New("SQLITE_AUTH: Unauthorized")
ErrAdminRequired = errors.New("SQLITE_AUTH: Unauthorized; Admin Privileges Required")
)
// Authenticate will perform an authentication of the provided username
// and password against the database.
//
// If a database contains the SQLITE_USER table, then the
// call to Authenticate must be invoked with an
// appropriate username and password prior to enabling read and write
// access to the database.
//
// Returns SQLITE_OK on success or SQLITE_ERROR if the username/password
// combination is incorrect or unknown.
//
// If the SQLITE_USER table is not present in the database file, then
// this interface is a harmless no-op returning SQLITE_OK.
func (c *SQLiteConn) Authenticate(username, password string) error {
rv := c.authenticate(username, password)
switch rv {
case C.SQLITE_ERROR, C.SQLITE_AUTH:
return ErrUnauthorized
case C.SQLITE_OK:
return nil
default:
return c.lastError()
}
}
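// Editor's sketch (not part of the upstream file): with the sqlite_userauth
// build tag, authentication is commonly performed right after connecting, for
// example from a ConnectHook; the driver name and credentials are hypothetical.
//
//	sql.Register("sqlite3_with_auth",
//		&SQLiteDriver{
//			ConnectHook: func(conn *SQLiteConn) error {
//				return conn.Authenticate("admin", "s3cret")
//			},
//		})
//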
// authenticate provides the actual authentication to SQLite.
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authenticate(username, password string) int {
// Allocate C Variables
cuser := C.CString(username)
cpass := C.CString(password)
// Free C Variables
defer func() {
C.free(unsafe.Pointer(cuser))
C.free(unsafe.Pointer(cpass))
}()
return int(C._sqlite3_user_authenticate(c.db, cuser, cpass, C.int(len(password))))
}
// AuthUserAdd can be used (by an admin user only)
// to create a new user. When called on a no-authentication-required
// database, this routine converts the database into an authentication-
// required database, automatically makes the added user an
// administrator, and logs in the current connection as that user.
// The AuthUserAdd only works for the "main" database, not
// for any ATTACH-ed databases. Any call to AuthUserAdd by a
// non-admin user results in an error.
func (c *SQLiteConn) AuthUserAdd(username, password string, admin bool) error {
isAdmin := 0
if admin {
isAdmin = 1
}
rv := c.authUserAdd(username, password, isAdmin)
switch rv {
case C.SQLITE_ERROR, C.SQLITE_AUTH:
return ErrAdminRequired
case C.SQLITE_OK:
return nil
default:
return c.lastError()
}
}
// authUserAdd enables the User Authentication if not enabled.
// Otherwise it will add a user.
//
// When user authentication is already enabled then this function
// can only be called by an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserAdd(username, password string, admin int) int {
// Allocate C Variables
cuser := C.CString(username)
cpass := C.CString(password)
// Free C Variables
defer func() {
C.free(unsafe.Pointer(cuser))
C.free(unsafe.Pointer(cpass))
}()
return int(C._sqlite3_user_add(c.db, cuser, cpass, C.int(len(password)), C.int(admin)))
}
// AuthUserChange can be used to change a user's
// login credentials or admin privilege. Any user can change their own
// login credentials. Only an admin user can change another user's login
// credentials or admin privilege setting. No user may change their own
// admin privilege setting.
func (c *SQLiteConn) AuthUserChange(username, password string, admin bool) error {
isAdmin := 0
if admin {
isAdmin = 1
}
rv := c.authUserChange(username, password, isAdmin)
switch rv {
case C.SQLITE_ERROR, C.SQLITE_AUTH:
return ErrAdminRequired
case C.SQLITE_OK:
return nil
default:
return c.lastError()
}
}
// authUserChange modifies a user.
// Users can change their own password.
//
// Only admins can change passwords for other users
// and modify the admin flag.
//
// The admin flag of the currently logged-in user cannot be changed.
// This ensures that there is always an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserChange(username, password string, admin int) int {
// Allocate C Variables
cuser := C.CString(username)
cpass := C.CString(password)
// Free C Variables
defer func() {
C.free(unsafe.Pointer(cuser))
C.free(unsafe.Pointer(cpass))
}()
return int(C._sqlite3_user_change(c.db, cuser, cpass, C.int(len(password)), C.int(admin)))
}
// AuthUserDelete can be used (by an admin user only)
// to delete a user. The currently logged-in user cannot be deleted,
// which guarantees that there is always an admin user and hence that
// the database cannot be converted into a no-authentication-required
// database.
func (c *SQLiteConn) AuthUserDelete(username string) error {
rv := c.authUserDelete(username)
switch rv {
case C.SQLITE_ERROR, C.SQLITE_AUTH:
return ErrAdminRequired
case C.SQLITE_OK:
return nil
default:
return c.lastError()
}
}
// authUserDelete can be used to delete a user.
//
// This function can only be executed by an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserDelete(username string) int {
// Allocate C Variables
cuser := C.CString(username)
// Free C Variables
defer func() {
C.free(unsafe.Pointer(cuser))
}()
return int(C._sqlite3_user_delete(c.db, cuser))
}
// AuthEnabled checks if the database is protected by user authentication
func (c *SQLiteConn) AuthEnabled() (exists bool) {
rv := c.authEnabled()
if rv == 1 {
exists = true
}
return
}
// authEnabled performs the actual check for user authentication.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// 0 - Disabled
// 1 - Enabled
func (c *SQLiteConn) authEnabled() int {
return int(C._sqlite3_auth_enabled(c.db))
}
// EOF


@ -0,0 +1,152 @@
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !sqlite_userauth
package sqlite3
import (
"C"
)
// Authenticate will perform an authentication of the provided username
// and password against the database.
//
// If a database contains the SQLITE_USER table, then the
// call to Authenticate must be invoked with an
// appropriate username and password prior to enabling read and write
// access to the database.
//
// Returns SQLITE_OK on success or SQLITE_ERROR if the username/password
// combination is incorrect or unknown.
//
// If the SQLITE_USER table is not present in the database file, then
// this interface is a harmless no-op returning SQLITE_OK.
func (c *SQLiteConn) Authenticate(username, password string) error {
// NOOP
return nil
}
// authenticate provides the actual authentication to SQLite.
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authenticate(username, password string) int {
// NOOP
return 0
}
// AuthUserAdd can be used (by an admin user only)
// to create a new user. When called on a no-authentication-required
// database, this routine converts the database into an authentication-
// required database, automatically makes the added user an
// administrator, and logs in the current connection as that user.
// The AuthUserAdd only works for the "main" database, not
// for any ATTACH-ed databases. Any call to AuthUserAdd by a
// non-admin user results in an error.
func (c *SQLiteConn) AuthUserAdd(username, password string, admin bool) error {
// NOOP
return nil
}
// authUserAdd enables the User Authentication if not enabled.
// Otherwise it will add a user.
//
// When user authentication is already enabled then this function
// can only be called by an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserAdd(username, password string, admin int) int {
// NOOP
return 0
}
// AuthUserChange can be used to change a user's
// login credentials or admin privilege. Any user can change their own
// login credentials. Only an admin user can change another user's login
// credentials or admin privilege setting. No user may change their own
// admin privilege setting.
func (c *SQLiteConn) AuthUserChange(username, password string, admin bool) error {
// NOOP
return nil
}
// authUserChange modifies a user.
// Users can change their own password.
//
// Only admins can change passwords for other users
// and modify the admin flag.
//
// The admin flag of the currently logged-in user cannot be changed.
// This ensures that there is always an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserChange(username, password string, admin int) int {
// NOOP
return 0
}
// AuthUserDelete can be used (by an admin user only)
// to delete a user. The currently logged-in user cannot be deleted,
// which guarantees that there is always an admin user and hence that
// the database cannot be converted into a no-authentication-required
// database.
func (c *SQLiteConn) AuthUserDelete(username string) error {
// NOOP
return nil
}
// authUserDelete can be used to delete a user.
//
// This function can only be executed by an admin.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// C.SQLITE_OK (0)
// C.SQLITE_ERROR (1)
// C.SQLITE_AUTH (23)
func (c *SQLiteConn) authUserDelete(username string) int {
// NOOP
return 0
}
// AuthEnabled checks if the database is protected by user authentication
func (c *SQLiteConn) AuthEnabled() (exists bool) {
// NOOP
return false
}
// authEnabled performs the actual check for user authentication.
//
// This is not exported for usage in Go.
// It is however exported for usage within SQL by the user.
//
// Returns:
// 0 - Disabled
// 1 - Enabled
func (c *SQLiteConn) authEnabled() int {
// NOOP
return 0
}
// EOF


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_vacuum_full
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_DEFAULT_AUTOVACUUM=1
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,15 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_vacuum_incr
package sqlite3
/*
#cgo CFLAGS: -DSQLITE_DEFAULT_AUTOVACUUM=2
#cgo LDFLAGS: -lm
*/
import "C"


@ -0,0 +1,660 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_vtable vtable
package sqlite3
/*
#cgo CFLAGS: -std=gnu99
#cgo CFLAGS: -DSQLITE_ENABLE_RTREE
#cgo CFLAGS: -DSQLITE_THREADSAFE
#cgo CFLAGS: -DSQLITE_ENABLE_FTS3
#cgo CFLAGS: -DSQLITE_ENABLE_FTS3_PARENTHESIS
#cgo CFLAGS: -DSQLITE_ENABLE_FTS4_UNICODE61
#cgo CFLAGS: -DSQLITE_TRACE_SIZE_LIMIT=15
#cgo CFLAGS: -DSQLITE_ENABLE_COLUMN_METADATA=1
#cgo CFLAGS: -Wno-deprecated-declarations
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
#include <stdint.h>
#include <memory.h>
static inline char *_sqlite3_mprintf(char *zFormat, char *arg) {
return sqlite3_mprintf(zFormat, arg);
}
typedef struct goVTab goVTab;
struct goVTab {
sqlite3_vtab base;
void *vTab;
};
uintptr_t goMInit(void *db, void *pAux, int argc, char **argv, char **pzErr, int isCreate);
static int cXInit(sqlite3 *db, void *pAux, int argc, const char *const*argv, sqlite3_vtab **ppVTab, char **pzErr, int isCreate) {
void *vTab = (void *)goMInit(db, pAux, argc, (char**)argv, pzErr, isCreate);
if (!vTab || *pzErr) {
return SQLITE_ERROR;
}
goVTab *pvTab = (goVTab *)sqlite3_malloc(sizeof(goVTab));
if (!pvTab) {
*pzErr = sqlite3_mprintf("%s", "Out of memory");
return SQLITE_NOMEM;
}
memset(pvTab, 0, sizeof(goVTab));
pvTab->vTab = vTab;
*ppVTab = (sqlite3_vtab *)pvTab;
*pzErr = 0;
return SQLITE_OK;
}
static inline int cXCreate(sqlite3 *db, void *pAux, int argc, const char *const*argv, sqlite3_vtab **ppVTab, char **pzErr) {
return cXInit(db, pAux, argc, argv, ppVTab, pzErr, 1);
}
static inline int cXConnect(sqlite3 *db, void *pAux, int argc, const char *const*argv, sqlite3_vtab **ppVTab, char **pzErr) {
return cXInit(db, pAux, argc, argv, ppVTab, pzErr, 0);
}
char* goVBestIndex(void *pVTab, void *icp);
static inline int cXBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *info) {
char *pzErr = goVBestIndex(((goVTab*)pVTab)->vTab, info);
if (pzErr) {
if (pVTab->zErrMsg)
sqlite3_free(pVTab->zErrMsg);
pVTab->zErrMsg = pzErr;
return SQLITE_ERROR;
}
return SQLITE_OK;
}
char* goVRelease(void *pVTab, int isDestroy);
static int cXRelease(sqlite3_vtab *pVTab, int isDestroy) {
char *pzErr = goVRelease(((goVTab*)pVTab)->vTab, isDestroy);
if (pzErr) {
if (pVTab->zErrMsg)
sqlite3_free(pVTab->zErrMsg);
pVTab->zErrMsg = pzErr;
return SQLITE_ERROR;
}
if (pVTab->zErrMsg)
sqlite3_free(pVTab->zErrMsg);
sqlite3_free(pVTab);
return SQLITE_OK;
}
static inline int cXDisconnect(sqlite3_vtab *pVTab) {
return cXRelease(pVTab, 0);
}
static inline int cXDestroy(sqlite3_vtab *pVTab) {
return cXRelease(pVTab, 1);
}
typedef struct goVTabCursor goVTabCursor;
struct goVTabCursor {
sqlite3_vtab_cursor base;
void *vTabCursor;
};
uintptr_t goVOpen(void *pVTab, char **pzErr);
static int cXOpen(sqlite3_vtab *pVTab, sqlite3_vtab_cursor **ppCursor) {
void *vTabCursor = (void *)goVOpen(((goVTab*)pVTab)->vTab, &(pVTab->zErrMsg));
goVTabCursor *pCursor = (goVTabCursor *)sqlite3_malloc(sizeof(goVTabCursor));
if (!pCursor) {
return SQLITE_NOMEM;
}
memset(pCursor, 0, sizeof(goVTabCursor));
pCursor->vTabCursor = vTabCursor;
*ppCursor = (sqlite3_vtab_cursor *)pCursor;
return SQLITE_OK;
}
static int setErrMsg(sqlite3_vtab_cursor *pCursor, char *pzErr) {
if (pCursor->pVtab->zErrMsg)
sqlite3_free(pCursor->pVtab->zErrMsg);
pCursor->pVtab->zErrMsg = pzErr;
return SQLITE_ERROR;
}
char* goVClose(void *pCursor);
static int cXClose(sqlite3_vtab_cursor *pCursor) {
char *pzErr = goVClose(((goVTabCursor*)pCursor)->vTabCursor);
if (pzErr) {
return setErrMsg(pCursor, pzErr);
}
sqlite3_free(pCursor);
return SQLITE_OK;
}
char* goVFilter(void *pCursor, int idxNum, char* idxName, int argc, sqlite3_value **argv);
static int cXFilter(sqlite3_vtab_cursor *pCursor, int idxNum, const char *idxStr, int argc, sqlite3_value **argv) {
char *pzErr = goVFilter(((goVTabCursor*)pCursor)->vTabCursor, idxNum, (char*)idxStr, argc, argv);
if (pzErr) {
return setErrMsg(pCursor, pzErr);
}
return SQLITE_OK;
}
char* goVNext(void *pCursor);
static int cXNext(sqlite3_vtab_cursor *pCursor) {
char *pzErr = goVNext(((goVTabCursor*)pCursor)->vTabCursor);
if (pzErr) {
return setErrMsg(pCursor, pzErr);
}
return SQLITE_OK;
}
int goVEof(void *pCursor);
static inline int cXEof(sqlite3_vtab_cursor *pCursor) {
return goVEof(((goVTabCursor*)pCursor)->vTabCursor);
}
char* goVColumn(void *pCursor, void *cp, int col);
static int cXColumn(sqlite3_vtab_cursor *pCursor, sqlite3_context *ctx, int i) {
char *pzErr = goVColumn(((goVTabCursor*)pCursor)->vTabCursor, ctx, i);
if (pzErr) {
return setErrMsg(pCursor, pzErr);
}
return SQLITE_OK;
}
char* goVRowid(void *pCursor, sqlite3_int64 *pRowid);
static int cXRowid(sqlite3_vtab_cursor *pCursor, sqlite3_int64 *pRowid) {
char *pzErr = goVRowid(((goVTabCursor*)pCursor)->vTabCursor, pRowid);
if (pzErr) {
return setErrMsg(pCursor, pzErr);
}
return SQLITE_OK;
}
char* goVUpdate(void *pVTab, int argc, sqlite3_value **argv, sqlite3_int64 *pRowid);
static int cXUpdate(sqlite3_vtab *pVTab, int argc, sqlite3_value **argv, sqlite3_int64 *pRowid) {
char *pzErr = goVUpdate(((goVTab*)pVTab)->vTab, argc, argv, pRowid);
if (pzErr) {
if (pVTab->zErrMsg)
sqlite3_free(pVTab->zErrMsg);
pVTab->zErrMsg = pzErr;
return SQLITE_ERROR;
}
return SQLITE_OK;
}
static sqlite3_module goModule = {
0, // iVersion
cXCreate, // xCreate - create a table
cXConnect, // xConnect - connect to an existing table
cXBestIndex, // xBestIndex - Determine search strategy
cXDisconnect, // xDisconnect - Disconnect from a table
cXDestroy, // xDestroy - Drop a table
cXOpen, // xOpen - open a cursor
cXClose, // xClose - close a cursor
cXFilter, // xFilter - configure scan constraints
cXNext, // xNext - advance a cursor
cXEof, // xEof
cXColumn, // xColumn - read data
cXRowid, // xRowid - read data
cXUpdate, // xUpdate - write data
// Not implemented
0, // xBegin - begin transaction
0, // xSync - sync transaction
0, // xCommit - commit transaction
0, // xRollback - rollback transaction
0, // xFindFunction - function overloading
0, // xRename - rename the table
0, // xSavepoint
0, // xRelease
0 // xRollbackTo
};
void goMDestroy(void*);
static int _sqlite3_create_module(sqlite3 *db, const char *zName, uintptr_t pClientData) {
return sqlite3_create_module_v2(db, zName, &goModule, (void*) pClientData, goMDestroy);
}
*/
import "C"
import (
"fmt"
"math"
"reflect"
"unsafe"
)
type sqliteModule struct {
c *SQLiteConn
name string
module Module
}
type sqliteVTab struct {
module *sqliteModule
vTab VTab
}
type sqliteVTabCursor struct {
vTab *sqliteVTab
vTabCursor VTabCursor
}
// Op is the type of an operation.
type Op uint8
// These constants identify the operations.
const (
OpEQ Op = 2
OpGT = 4
OpLE = 8
OpLT = 16
OpGE = 32
OpMATCH = 64
OpLIKE = 65 /* 3.10.0 and later only */
OpGLOB = 66 /* 3.10.0 and later only */
OpREGEXP = 67 /* 3.10.0 and later only */
OpScanUnique = 1 /* Scan visits at most 1 row */
)
// InfoConstraint gives information about a constraint.
type InfoConstraint struct {
Column int
Op Op
Usable bool
}
// InfoOrderBy gives information about an ORDER BY term.
type InfoOrderBy struct {
Column int
Desc bool
}
func constraints(info *C.sqlite3_index_info) []InfoConstraint {
slice := *(*[]C.struct_sqlite3_index_constraint)(unsafe.Pointer(&reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(info.aConstraint)),
Len: int(info.nConstraint),
Cap: int(info.nConstraint),
}))
cst := make([]InfoConstraint, 0, len(slice))
for _, c := range slice {
var usable bool
if c.usable > 0 {
usable = true
}
cst = append(cst, InfoConstraint{
Column: int(c.iColumn),
Op: Op(c.op),
Usable: usable,
})
}
return cst
}
func orderBys(info *C.sqlite3_index_info) []InfoOrderBy {
slice := *(*[]C.struct_sqlite3_index_orderby)(unsafe.Pointer(&reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(info.aOrderBy)),
Len: int(info.nOrderBy),
Cap: int(info.nOrderBy),
}))
ob := make([]InfoOrderBy, 0, len(slice))
for _, c := range slice {
var desc bool
if c.desc > 0 {
desc = true
}
ob = append(ob, InfoOrderBy{
Column: int(c.iColumn),
Desc: desc,
})
}
return ob
}
// IndexResult is a Go struct representation of what eventually ends up in the
// output fields for `sqlite3_index_info`
// See: https://www.sqlite.org/c3ref/index_info.html
type IndexResult struct {
Used []bool // aConstraintUsage
IdxNum int
IdxStr string
AlreadyOrdered bool // orderByConsumed
EstimatedCost float64
EstimatedRows float64
}
// mPrintf is a utility wrapper around sqlite3_mprintf
func mPrintf(format, arg string) *C.char {
cf := C.CString(format)
defer C.free(unsafe.Pointer(cf))
ca := C.CString(arg)
defer C.free(unsafe.Pointer(ca))
return C._sqlite3_mprintf(cf, ca)
}
//export goMInit
func goMInit(db, pClientData unsafe.Pointer, argc C.int, argv **C.char, pzErr **C.char, isCreate C.int) C.uintptr_t {
m := lookupHandle(pClientData).(*sqliteModule)
if m.c.db != (*C.sqlite3)(db) {
*pzErr = mPrintf("%s", "Inconsistent db handles")
return 0
}
args := make([]string, argc)
var A []*C.char
slice := reflect.SliceHeader{Data: uintptr(unsafe.Pointer(argv)), Len: int(argc), Cap: int(argc)}
a := reflect.NewAt(reflect.TypeOf(A), unsafe.Pointer(&slice)).Elem().Interface()
for i, s := range a.([]*C.char) {
args[i] = C.GoString(s)
}
var vTab VTab
var err error
if isCreate == 1 {
vTab, err = m.module.Create(m.c, args)
} else {
vTab, err = m.module.Connect(m.c, args)
}
if err != nil {
*pzErr = mPrintf("%s", err.Error())
return 0
}
vt := sqliteVTab{m, vTab}
*pzErr = nil
return C.uintptr_t(uintptr(newHandle(m.c, &vt)))
}
//export goVRelease
func goVRelease(pVTab unsafe.Pointer, isDestroy C.int) *C.char {
vt := lookupHandle(pVTab).(*sqliteVTab)
var err error
if isDestroy == 1 {
err = vt.vTab.Destroy()
} else {
err = vt.vTab.Disconnect()
}
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
//export goVOpen
func goVOpen(pVTab unsafe.Pointer, pzErr **C.char) C.uintptr_t {
vt := lookupHandle(pVTab).(*sqliteVTab)
vTabCursor, err := vt.vTab.Open()
if err != nil {
*pzErr = mPrintf("%s", err.Error())
return 0
}
vtc := sqliteVTabCursor{vt, vTabCursor}
*pzErr = nil
return C.uintptr_t(uintptr(newHandle(vt.module.c, &vtc)))
}
//export goVBestIndex
func goVBestIndex(pVTab unsafe.Pointer, icp unsafe.Pointer) *C.char {
vt := lookupHandle(pVTab).(*sqliteVTab)
info := (*C.sqlite3_index_info)(icp)
csts := constraints(info)
res, err := vt.vTab.BestIndex(csts, orderBys(info))
if err != nil {
return mPrintf("%s", err.Error())
}
if len(res.Used) != len(csts) {
return mPrintf("Result.Used != expected value", "")
}
// Get a pointer to constraint_usage struct so we can update in place.
slice := *(*[]C.struct_sqlite3_index_constraint_usage)(unsafe.Pointer(&reflect.SliceHeader{
Data: uintptr(unsafe.Pointer(info.aConstraintUsage)),
Len: int(info.nConstraint),
Cap: int(info.nConstraint),
}))
index := 1
for i := range slice {
if res.Used[i] {
slice[i].argvIndex = C.int(index)
slice[i].omit = C.uchar(1)
index++
}
}
info.idxNum = C.int(res.IdxNum)
idxStr := C.CString(res.IdxStr)
defer C.free(unsafe.Pointer(idxStr))
info.idxStr = idxStr
info.needToFreeIdxStr = C.int(0)
if res.AlreadyOrdered {
info.orderByConsumed = C.int(1)
}
info.estimatedCost = C.double(res.EstimatedCost)
info.estimatedRows = C.sqlite3_int64(res.EstimatedRows)
return nil
}
//export goVClose
func goVClose(pCursor unsafe.Pointer) *C.char {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
err := vtc.vTabCursor.Close()
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
//export goMDestroy
func goMDestroy(pClientData unsafe.Pointer) {
m := lookupHandle(pClientData).(*sqliteModule)
m.module.DestroyModule()
}
//export goVFilter
func goVFilter(pCursor unsafe.Pointer, idxNum C.int, idxName *C.char, argc C.int, argv **C.sqlite3_value) *C.char {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
args := (*[(math.MaxInt32 - 1) / unsafe.Sizeof((*C.sqlite3_value)(nil))]*C.sqlite3_value)(unsafe.Pointer(argv))[:argc:argc]
vals := make([]interface{}, 0, argc)
for _, v := range args {
conv, err := callbackArgGeneric(v)
if err != nil {
return mPrintf("%s", err.Error())
}
vals = append(vals, conv.Interface())
}
err := vtc.vTabCursor.Filter(int(idxNum), C.GoString(idxName), vals)
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
//export goVNext
func goVNext(pCursor unsafe.Pointer) *C.char {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
err := vtc.vTabCursor.Next()
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
//export goVEof
func goVEof(pCursor unsafe.Pointer) C.int {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
err := vtc.vTabCursor.EOF()
if err {
return 1
}
return 0
}
//export goVColumn
func goVColumn(pCursor, cp unsafe.Pointer, col C.int) *C.char {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
c := (*SQLiteContext)(cp)
err := vtc.vTabCursor.Column(c, int(col))
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
//export goVRowid
func goVRowid(pCursor unsafe.Pointer, pRowid *C.sqlite3_int64) *C.char {
vtc := lookupHandle(pCursor).(*sqliteVTabCursor)
rowid, err := vtc.vTabCursor.Rowid()
if err != nil {
return mPrintf("%s", err.Error())
}
*pRowid = C.sqlite3_int64(rowid)
return nil
}
//export goVUpdate
func goVUpdate(pVTab unsafe.Pointer, argc C.int, argv **C.sqlite3_value, pRowid *C.sqlite3_int64) *C.char {
vt := lookupHandle(pVTab).(*sqliteVTab)
var tname string
if n, ok := vt.vTab.(interface {
TableName() string
}); ok {
tname = n.TableName() + " "
}
err := fmt.Errorf("virtual %s table %sis read-only", vt.module.name, tname)
if v, ok := vt.vTab.(VTabUpdater); ok {
// convert argv
args := (*[(math.MaxInt32 - 1) / unsafe.Sizeof((*C.sqlite3_value)(nil))]*C.sqlite3_value)(unsafe.Pointer(argv))[:argc:argc]
vals := make([]interface{}, 0, argc)
for _, v := range args {
conv, err := callbackArgGeneric(v)
if err != nil {
return mPrintf("%s", err.Error())
}
// work around for SQLITE_NULL
x := conv.Interface()
if z, ok := x.([]byte); ok && z == nil {
x = nil
}
vals = append(vals, x)
}
switch {
case argc == 1:
err = v.Delete(vals[0])
case argc > 1 && vals[0] == nil:
var id int64
id, err = v.Insert(vals[1], vals[2:])
if err == nil {
*pRowid = C.sqlite3_int64(id)
}
case argc > 1:
err = v.Update(vals[1], vals[2:])
}
}
if err != nil {
return mPrintf("%s", err.Error())
}
return nil
}
// Module is a "virtual table module", it defines the implementation of a
// virtual tables. See: http://sqlite.org/c3ref/module.html
type Module interface {
// http://sqlite.org/vtab.html#xcreate
Create(c *SQLiteConn, args []string) (VTab, error)
// http://sqlite.org/vtab.html#xconnect
Connect(c *SQLiteConn, args []string) (VTab, error)
// http://sqlite.org/c3ref/create_module.html
DestroyModule()
}
// VTab describes a particular instance of the virtual table.
// See: http://sqlite.org/c3ref/vtab.html
type VTab interface {
// http://sqlite.org/vtab.html#xbestindex
BestIndex([]InfoConstraint, []InfoOrderBy) (*IndexResult, error)
// http://sqlite.org/vtab.html#xdisconnect
Disconnect() error
// http://sqlite.org/vtab.html#sqlite3_module.xDestroy
Destroy() error
// http://sqlite.org/vtab.html#xopen
Open() (VTabCursor, error)
}
// VTabUpdater is a type that allows a VTab to be inserted, updated, or
// deleted.
// See: https://sqlite.org/vtab.html#xupdate
type VTabUpdater interface {
Delete(interface{}) error
Insert(interface{}, []interface{}) (int64, error)
Update(interface{}, []interface{}) error
}
// VTabCursor describes cursors that point into the virtual table and are used
// to loop through the virtual table. See: http://sqlite.org/c3ref/vtab_cursor.html
type VTabCursor interface {
// http://sqlite.org/vtab.html#xclose
Close() error
// http://sqlite.org/vtab.html#xfilter
Filter(idxNum int, idxStr string, vals []interface{}) error
// http://sqlite.org/vtab.html#xnext
Next() error
// http://sqlite.org/vtab.html#xeof
EOF() bool
// http://sqlite.org/vtab.html#xcolumn
Column(c *SQLiteContext, col int) error
// http://sqlite.org/vtab.html#xrowid
Rowid() (int64, error)
}
// DeclareVTab declares the Schema of a virtual table.
// See: http://sqlite.org/c3ref/declare_vtab.html
func (c *SQLiteConn) DeclareVTab(sql string) error {
zSQL := C.CString(sql)
defer C.free(unsafe.Pointer(zSQL))
rv := C.sqlite3_declare_vtab(c.db, zSQL)
if rv != C.SQLITE_OK {
return c.lastError()
}
return nil
}
// CreateModule registers a virtual table implementation.
// See: http://sqlite.org/c3ref/create_module.html
func (c *SQLiteConn) CreateModule(moduleName string, module Module) error {
mname := C.CString(moduleName)
defer C.free(unsafe.Pointer(mname))
udm := sqliteModule{c, moduleName, module}
rv := C._sqlite3_create_module(c.db, mname, C.uintptr_t(uintptr(newHandle(c, &udm))))
if rv != C.SQLITE_OK {
return c.lastError()
}
return nil
}
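
// Usage sketch (not part of the vendored file): a minimal program that wires the
// Module/VTab/VTabCursor interfaces above into database/sql, assuming the driver is
// built with the sqlite_vtable (or vtable) tag. The module name "seq", the driver name
// "sqlite3_with_seq", and the ten-row integer sequence are illustrative only; a
// writable table would additionally implement VTabUpdater.
package main

import (
	"database/sql"
	"fmt"

	"github.com/mattn/go-sqlite3"
)

// seqModule exposes a fixed-length sequence of integers as a virtual table.
type seqModule struct{}

func (m *seqModule) Create(c *sqlite3.SQLiteConn, args []string) (sqlite3.VTab, error) {
	// Declare the columns the virtual table exposes to SQLite.
	if err := c.DeclareVTab("CREATE TABLE x(value INTEGER)"); err != nil {
		return nil, err
	}
	return &seqTable{n: 10}, nil
}

func (m *seqModule) Connect(c *sqlite3.SQLiteConn, args []string) (sqlite3.VTab, error) {
	return m.Create(c, args)
}

func (m *seqModule) DestroyModule() {}

// seqTable is the VTab; it only knows how many rows it has.
type seqTable struct{ n int64 }

func (t *seqTable) BestIndex(cst []sqlite3.InfoConstraint, ob []sqlite3.InfoOrderBy) (*sqlite3.IndexResult, error) {
	// Accept a plain full scan: one Used entry per constraint, all false.
	return &sqlite3.IndexResult{Used: make([]bool, len(cst))}, nil
}

func (t *seqTable) Disconnect() error { return nil }
func (t *seqTable) Destroy() error    { return nil }

func (t *seqTable) Open() (sqlite3.VTabCursor, error) {
	return &seqCursor{t: t}, nil
}

// seqCursor walks rows 0..n-1.
type seqCursor struct {
	t   *seqTable
	row int64
}

func (c *seqCursor) Close() error { return nil }

func (c *seqCursor) Filter(idxNum int, idxStr string, vals []interface{}) error {
	c.row = 0 // restart the scan
	return nil
}

func (c *seqCursor) Next() error           { c.row++; return nil }
func (c *seqCursor) EOF() bool             { return c.row >= c.t.n }
func (c *seqCursor) Rowid() (int64, error) { return c.row, nil }

func (c *seqCursor) Column(ctx *sqlite3.SQLiteContext, col int) error {
	ctx.ResultInt64(c.row) // single column: the row's value
	return nil
}

func main() {
	sql.Register("sqlite3_with_seq", &sqlite3.SQLiteDriver{
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			return conn.CreateModule("seq", &seqModule{})
		},
	})

	db, err := sql.Open("sqlite3_with_seq", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE VIRTUAL TABLE nums USING seq"); err != nil {
		panic(err)
	}
	var total int64
	if err := db.QueryRow("SELECT sum(value) FROM nums").Scan(&total); err != nil {
		panic(err)
	}
	fmt.Println("sum =", total) // 0+1+...+9 = 45
}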

17
vendor/github.com/mattn/go-sqlite3/sqlite3_other.go generated vendored Normal file

@ -0,0 +1,17 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !windows
package sqlite3
/*
#cgo CFLAGS: -I.
#cgo linux LDFLAGS: -ldl
#cgo linux,ppc LDFLAGS: -lpthread
#cgo linux,ppc64 LDFLAGS: -lpthread
#cgo linux,ppc64le LDFLAGS: -lpthread
*/
import "C"

14
vendor/github.com/mattn/go-sqlite3/sqlite3_solaris.go generated vendored Normal file

@ -0,0 +1,14 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build solaris
package sqlite3
/*
#cgo CFLAGS: -D__EXTENSIONS__=1
#cgo LDFLAGS: -lc
*/
import "C"

287
vendor/github.com/mattn/go-sqlite3/sqlite3_trace.go generated vendored Normal file

@ -0,0 +1,287 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build sqlite_trace trace
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
#include <stdlib.h>
int traceCallbackTrampoline(unsigned int traceEventCode, void *ctx, void *p, void *x);
*/
import "C"
import (
"fmt"
"strings"
"sync"
"unsafe"
)
// The Trace... constants identify the possible events causing callback invocation.
// Values are the same as the corresponding SQLite Trace Event Codes.
const (
TraceStmt = uint32(C.SQLITE_TRACE_STMT)
TraceProfile = uint32(C.SQLITE_TRACE_PROFILE)
TraceRow = uint32(C.SQLITE_TRACE_ROW)
TraceClose = uint32(C.SQLITE_TRACE_CLOSE)
)
type TraceInfo struct {
// Pack together the shorter fields, to keep the struct smaller.
// On a 64-bit machine there would be padding
// between EventCode and ConnHandle; having AutoCommit here is "free":
EventCode uint32
AutoCommit bool
ConnHandle uintptr
// Usually filled, unless EventCode = TraceClose = SQLITE_TRACE_CLOSE:
// identifier for a prepared statement:
StmtHandle uintptr
// Two strings filled when EventCode = TraceStmt = SQLITE_TRACE_STMT:
// (1) either the unexpanded SQL text of the prepared statement, or
// an SQL comment that indicates the invocation of a trigger;
// (2) expanded SQL, if requested and if (1) is not an SQL comment.
StmtOrTrigger string
ExpandedSQL string // only if requested (TraceConfig.WantExpandedSQL = true)
// filled when EventCode = TraceProfile = SQLITE_TRACE_PROFILE:
// estimated number of nanoseconds that the prepared statement took to run:
RunTimeNanosec int64
DBError Error
}
// TraceUserCallback gives the signature for a trace function
// provided by the user (Go application programmer).
// SQLite 3.14 documentation (as of September 2, 2016)
// for SQL Trace Hook = sqlite3_trace_v2():
// The integer return value from the callback is currently ignored,
// though this may change in future releases. Callback implementations
// should return zero to ensure future compatibility.
type TraceUserCallback func(TraceInfo) int
type TraceConfig struct {
Callback TraceUserCallback
EventMask uint32
WantExpandedSQL bool
}
func fillDBError(dbErr *Error, db *C.sqlite3) {
// See SQLiteConn.lastError(), in file 'sqlite3.go' at the time of writing (Sept 5, 2016)
dbErr.Code = ErrNo(C.sqlite3_errcode(db))
dbErr.ExtendedCode = ErrNoExtended(C.sqlite3_extended_errcode(db))
dbErr.err = C.GoString(C.sqlite3_errmsg(db))
}
func fillExpandedSQL(info *TraceInfo, db *C.sqlite3, pStmt unsafe.Pointer) {
if pStmt == nil {
panic("No SQLite statement pointer in P arg of trace_v2 callback")
}
expSQLiteCStr := C.sqlite3_expanded_sql((*C.sqlite3_stmt)(pStmt))
defer C.sqlite3_free(unsafe.Pointer(expSQLiteCStr))
if expSQLiteCStr == nil {
fillDBError(&info.DBError, db)
return
}
info.ExpandedSQL = C.GoString(expSQLiteCStr)
}
//export traceCallbackTrampoline
func traceCallbackTrampoline(
traceEventCode C.uint,
// Parameter named 'C' in SQLite docs = Context given at registration:
ctx unsafe.Pointer,
// Parameter named 'P' in SQLite docs (Primary event data?):
p unsafe.Pointer,
// Parameter named 'X' in SQLite docs (eXtra event data?):
xValue unsafe.Pointer) C.int {
eventCode := uint32(traceEventCode)
if ctx == nil {
panic(fmt.Sprintf("No context (ev 0x%x)", traceEventCode))
}
contextDB := (*C.sqlite3)(ctx)
connHandle := uintptr(ctx)
var traceConf TraceConfig
var found bool
if eventCode == TraceClose {
// clean up traceMap: 'pop' means get and delete
traceConf, found = popTraceMapping(connHandle)
} else {
traceConf, found = lookupTraceMapping(connHandle)
}
if !found {
panic(fmt.Sprintf("Mapping not found for handle 0x%x (ev 0x%x)",
connHandle, eventCode))
}
var info TraceInfo
info.EventCode = eventCode
info.AutoCommit = (int(C.sqlite3_get_autocommit(contextDB)) != 0)
info.ConnHandle = connHandle
switch eventCode {
case TraceStmt:
info.StmtHandle = uintptr(p)
var xStr string
if xValue != nil {
xStr = C.GoString((*C.char)(xValue))
}
info.StmtOrTrigger = xStr
if !strings.HasPrefix(xStr, "--") {
// Not SQL comment, therefore the current event
// is not related to a trigger.
// The user might want to receive the expanded SQL;
// let's check:
if traceConf.WantExpandedSQL {
fillExpandedSQL(&info, contextDB, p)
}
}
case TraceProfile:
info.StmtHandle = uintptr(p)
if xValue == nil {
panic("NULL pointer in X arg of trace_v2 callback for SQLITE_TRACE_PROFILE event")
}
info.RunTimeNanosec = *(*int64)(xValue)
// sample the error //TODO: is it safe? is it useful?
fillDBError(&info.DBError, contextDB)
case TraceRow:
info.StmtHandle = uintptr(p)
case TraceClose:
handle := uintptr(p)
if handle != info.ConnHandle {
panic(fmt.Sprintf("Different conn handle 0x%x (expected 0x%x) in SQLITE_TRACE_CLOSE event.",
handle, info.ConnHandle))
}
default:
// Pass unsupported events to the user callback (if configured);
// let the user callback decide whether to panic or ignore them.
}
// Do not execute user callback when the event was not requested by user!
// Remember that the Close event is always selected when
// registering this callback trampoline with SQLite --- for cleanup.
// In the future there may be more events forced to "selected" in SQLite
// for the driver's needs.
if traceConf.EventMask&eventCode == 0 {
return 0
}
r := 0
if traceConf.Callback != nil {
r = traceConf.Callback(info)
}
return C.int(r)
}
type traceMapEntry struct {
config TraceConfig
}
var traceMapLock sync.Mutex
var traceMap = make(map[uintptr]traceMapEntry)
func addTraceMapping(connHandle uintptr, traceConf TraceConfig) {
traceMapLock.Lock()
defer traceMapLock.Unlock()
oldEntryCopy, found := traceMap[connHandle]
if found {
panic(fmt.Sprintf("Adding trace config %v: handle 0x%x already registered (%v).",
traceConf, connHandle, oldEntryCopy.config))
}
traceMap[connHandle] = traceMapEntry{config: traceConf}
}
func lookupTraceMapping(connHandle uintptr) (TraceConfig, bool) {
traceMapLock.Lock()
defer traceMapLock.Unlock()
entryCopy, found := traceMap[connHandle]
return entryCopy.config, found
}
// 'pop' = get and delete from map before returning the value to the caller
func popTraceMapping(connHandle uintptr) (TraceConfig, bool) {
traceMapLock.Lock()
defer traceMapLock.Unlock()
entryCopy, found := traceMap[connHandle]
if found {
delete(traceMap, connHandle)
}
return entryCopy.config, found
}
// SetTrace installs or removes the trace callback for the given database connection.
// It's not named 'RegisterTrace' because only one callback can be kept and called.
// Calling SetTrace a second time on the same database connection
// overrides (cancels) any prior callback and all its settings:
// event mask, etc.
func (c *SQLiteConn) SetTrace(requested *TraceConfig) error {
connHandle := uintptr(unsafe.Pointer(c.db))
_, _ = popTraceMapping(connHandle)
if requested == nil {
// The traceMap entry was deleted already by popTraceMapping():
// can disable all events now, no need to watch for TraceClose.
err := c.setSQLiteTrace(0)
return err
}
reqCopy := *requested
// Disable potentially expensive operations
// if their result will not be used. We are doing this
// just in case the caller provided nonsensical input.
if reqCopy.EventMask&TraceStmt == 0 {
reqCopy.WantExpandedSQL = false
}
addTraceMapping(connHandle, reqCopy)
// The callback trampoline function does cleanup on Close event,
// regardless of the presence or absence of the user callback.
// Therefore it needs the Close event to be selected:
actualEventMask := uint(reqCopy.EventMask | TraceClose)
err := c.setSQLiteTrace(actualEventMask)
return err
}
func (c *SQLiteConn) setSQLiteTrace(sqliteEventMask uint) error {
rv := C.sqlite3_trace_v2(c.db,
C.uint(sqliteEventMask),
(*[0]byte)(unsafe.Pointer(C.traceCallbackTrampoline)),
unsafe.Pointer(c.db)) // Fourth arg is same as first: we are
// passing the database connection handle as callback context.
if rv != C.SQLITE_OK {
return c.lastError()
}
return nil
}
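
// Usage sketch (not part of the vendored file): registering a statement/profile trace
// on every new connection via SetTrace, assuming a build with the sqlite_trace (or
// trace) tag. The driver name "sqlite3_traced" and the log format are illustrative
// only; per the TraceUserCallback note above, the callback returns zero.
package main

import (
	"database/sql"
	"log"

	"github.com/mattn/go-sqlite3"
)

func main() {
	sql.Register("sqlite3_traced", &sqlite3.SQLiteDriver{
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			return conn.SetTrace(&sqlite3.TraceConfig{
				Callback: func(info sqlite3.TraceInfo) int {
					log.Printf("trace event=0x%x sql=%q expanded=%q",
						info.EventCode, info.StmtOrTrigger, info.ExpandedSQL)
					return 0 // always return zero for future compatibility
				},
				EventMask:       sqlite3.TraceStmt | sqlite3.TraceProfile,
				WantExpandedSQL: true,
			})
		},
	})

	db, err := sql.Open("sqlite3_traced", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Each statement below produces TraceStmt and TraceProfile events.
	if _, err := db.Exec("CREATE TABLE t(a INTEGER)"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("INSERT INTO t VALUES (?)", 42); err != nil {
		log.Fatal(err)
	}
}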

62
vendor/github.com/mattn/go-sqlite3/sqlite3_type.go generated vendored Normal file

@ -0,0 +1,62 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
package sqlite3
/*
#ifndef USE_LIBSQLITE3
#include <sqlite3-binding.h>
#else
#include <sqlite3.h>
#endif
*/
import "C"
import (
"reflect"
"time"
)
// ColumnTypeDatabaseTypeName implements RowsColumnTypeDatabaseTypeName.
func (rc *SQLiteRows) ColumnTypeDatabaseTypeName(i int) string {
return C.GoString(C.sqlite3_column_decltype(rc.s.s, C.int(i)))
}
/*
func (rc *SQLiteRows) ColumnTypeLength(index int) (length int64, ok bool) {
return 0, false
}
func (rc *SQLiteRows) ColumnTypePrecisionScale(index int) (precision, scale int64, ok bool) {
return 0, 0, false
}
*/
// ColumnTypeNullable implements RowsColumnTypeNullable.
func (rc *SQLiteRows) ColumnTypeNullable(i int) (nullable, ok bool) {
return true, true
}
// ColumnTypeScanType implements RowsColumnTypeScanType.
func (rc *SQLiteRows) ColumnTypeScanType(i int) reflect.Type {
switch C.sqlite3_column_type(rc.s.s, C.int(i)) {
case C.SQLITE_INTEGER:
switch C.GoString(C.sqlite3_column_decltype(rc.s.s, C.int(i))) {
case "timestamp", "datetime", "date":
return reflect.TypeOf(time.Time{})
case "boolean":
return reflect.TypeOf(false)
}
return reflect.TypeOf(int64(0))
case C.SQLITE_FLOAT:
return reflect.TypeOf(float64(0))
case C.SQLITE_BLOB:
return reflect.SliceOf(reflect.TypeOf(byte(0)))
case C.SQLITE_NULL:
return reflect.TypeOf(nil)
case C.SQLITE_TEXT:
return reflect.TypeOf("")
}
return reflect.SliceOf(reflect.TypeOf(byte(0)))
}
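
// Usage sketch (not part of the vendored file): how the scan-type mapping above
// surfaces through database/sql's ColumnTypes. The table and values are illustrative;
// the cursor is advanced first so sqlite3_column_type reports the stored types
// rather than NULL.
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE t(id INTEGER, price REAL, note TEXT)"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("INSERT INTO t VALUES (1, 9.5, 'hi')"); err != nil {
		log.Fatal(err)
	}

	rows, err := db.Query("SELECT id, price, note FROM t")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		cols, err := rows.ColumnTypes()
		if err != nil {
			log.Fatal(err)
		}
		for _, ct := range cols {
			// Prints, e.g.: "id INTEGER int64", "price REAL float64", "note TEXT string"
			fmt.Println(ct.Name(), ct.DatabaseTypeName(), ct.ScanType())
		}
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}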


@ -0,0 +1,39 @@
// Copyright (C) 2018 G.J.R. Timmer <gjr.timmer@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build cgo
package sqlite3
// usleep is a function available on *nix-based systems.
// It is not present on Windows.
// Windows has a sleep function, but it works with seconds
// rather than with microseconds as usleep does.
//
// This code should improve performance on Windows because
// without usleep SQLite waits 1 second.
//
// Source: https://stackoverflow.com/questions/5801813/c-usleep-is-obsolete-workarounds-for-windows-mingw?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa
/*
#include <windows.h>
void usleep(__int64 usec)
{
HANDLE timer;
LARGE_INTEGER ft;
// Convert to 100 nanosecond interval, negative value indicates relative time
ft.QuadPart = -(10*usec);
timer = CreateWaitableTimer(NULL, TRUE, NULL);
SetWaitableTimer(timer, &ft, 0, NULL, NULL, 0);
WaitForSingleObject(timer, INFINITE);
CloseHandle(timer);
}
*/
import "C"
// EOF

18
vendor/github.com/mattn/go-sqlite3/sqlite3_windows.go generated vendored Normal file

@ -0,0 +1,18 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build windows
package sqlite3
/*
#cgo CFLAGS: -I.
#cgo CFLAGS: -fno-stack-check
#cgo CFLAGS: -fno-stack-protector
#cgo CFLAGS: -mno-stack-arg-probe
#cgo LDFLAGS: -lmingwex -lmingw32
#cgo windows,386 CFLAGS: -D_USE_32BIT_TIME_T
*/
import "C"

664
vendor/github.com/mattn/go-sqlite3/sqlite3ext.h generated vendored Normal file

@ -0,0 +1,664 @@
#ifndef USE_LIBSQLITE3
/*
** 2006 June 7
**
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
*************************************************************************
** This header file defines the SQLite interface for use by
** shared libraries that want to be imported as extensions into
** an SQLite instance. Shared libraries that intend to be loaded
** as extensions by SQLite should #include this file instead of
** sqlite3.h.
*/
#ifndef SQLITE3EXT_H
#define SQLITE3EXT_H
#include "sqlite3-binding.h"
/*
** The following structure holds pointers to all of the SQLite API
** routines.
**
** WARNING: In order to maintain backwards compatibility, add new
** interfaces to the end of this structure only. If you insert new
** interfaces in the middle of this structure, then older different
** versions of SQLite will not be able to load each other's shared
** libraries!
*/
struct sqlite3_api_routines {
void * (*aggregate_context)(sqlite3_context*,int nBytes);
int (*aggregate_count)(sqlite3_context*);
int (*bind_blob)(sqlite3_stmt*,int,const void*,int n,void(*)(void*));
int (*bind_double)(sqlite3_stmt*,int,double);
int (*bind_int)(sqlite3_stmt*,int,int);
int (*bind_int64)(sqlite3_stmt*,int,sqlite_int64);
int (*bind_null)(sqlite3_stmt*,int);
int (*bind_parameter_count)(sqlite3_stmt*);
int (*bind_parameter_index)(sqlite3_stmt*,const char*zName);
const char * (*bind_parameter_name)(sqlite3_stmt*,int);
int (*bind_text)(sqlite3_stmt*,int,const char*,int n,void(*)(void*));
int (*bind_text16)(sqlite3_stmt*,int,const void*,int,void(*)(void*));
int (*bind_value)(sqlite3_stmt*,int,const sqlite3_value*);
int (*busy_handler)(sqlite3*,int(*)(void*,int),void*);
int (*busy_timeout)(sqlite3*,int ms);
int (*changes)(sqlite3*);
int (*close)(sqlite3*);
int (*collation_needed)(sqlite3*,void*,void(*)(void*,sqlite3*,
int eTextRep,const char*));
int (*collation_needed16)(sqlite3*,void*,void(*)(void*,sqlite3*,
int eTextRep,const void*));
const void * (*column_blob)(sqlite3_stmt*,int iCol);
int (*column_bytes)(sqlite3_stmt*,int iCol);
int (*column_bytes16)(sqlite3_stmt*,int iCol);
int (*column_count)(sqlite3_stmt*pStmt);
const char * (*column_database_name)(sqlite3_stmt*,int);
const void * (*column_database_name16)(sqlite3_stmt*,int);
const char * (*column_decltype)(sqlite3_stmt*,int i);
const void * (*column_decltype16)(sqlite3_stmt*,int);
double (*column_double)(sqlite3_stmt*,int iCol);
int (*column_int)(sqlite3_stmt*,int iCol);
sqlite_int64 (*column_int64)(sqlite3_stmt*,int iCol);
const char * (*column_name)(sqlite3_stmt*,int);
const void * (*column_name16)(sqlite3_stmt*,int);
const char * (*column_origin_name)(sqlite3_stmt*,int);
const void * (*column_origin_name16)(sqlite3_stmt*,int);
const char * (*column_table_name)(sqlite3_stmt*,int);
const void * (*column_table_name16)(sqlite3_stmt*,int);
const unsigned char * (*column_text)(sqlite3_stmt*,int iCol);
const void * (*column_text16)(sqlite3_stmt*,int iCol);
int (*column_type)(sqlite3_stmt*,int iCol);
sqlite3_value* (*column_value)(sqlite3_stmt*,int iCol);
void * (*commit_hook)(sqlite3*,int(*)(void*),void*);
int (*complete)(const char*sql);
int (*complete16)(const void*sql);
int (*create_collation)(sqlite3*,const char*,int,void*,
int(*)(void*,int,const void*,int,const void*));
int (*create_collation16)(sqlite3*,const void*,int,void*,
int(*)(void*,int,const void*,int,const void*));
int (*create_function)(sqlite3*,const char*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*));
int (*create_function16)(sqlite3*,const void*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*));
int (*create_module)(sqlite3*,const char*,const sqlite3_module*,void*);
int (*data_count)(sqlite3_stmt*pStmt);
sqlite3 * (*db_handle)(sqlite3_stmt*);
int (*declare_vtab)(sqlite3*,const char*);
int (*enable_shared_cache)(int);
int (*errcode)(sqlite3*db);
const char * (*errmsg)(sqlite3*);
const void * (*errmsg16)(sqlite3*);
int (*exec)(sqlite3*,const char*,sqlite3_callback,void*,char**);
int (*expired)(sqlite3_stmt*);
int (*finalize)(sqlite3_stmt*pStmt);
void (*free)(void*);
void (*free_table)(char**result);
int (*get_autocommit)(sqlite3*);
void * (*get_auxdata)(sqlite3_context*,int);
int (*get_table)(sqlite3*,const char*,char***,int*,int*,char**);
int (*global_recover)(void);
void (*interruptx)(sqlite3*);
sqlite_int64 (*last_insert_rowid)(sqlite3*);
const char * (*libversion)(void);
int (*libversion_number)(void);
void *(*malloc)(int);
char * (*mprintf)(const char*,...);
int (*open)(const char*,sqlite3**);
int (*open16)(const void*,sqlite3**);
int (*prepare)(sqlite3*,const char*,int,sqlite3_stmt**,const char**);
int (*prepare16)(sqlite3*,const void*,int,sqlite3_stmt**,const void**);
void * (*profile)(sqlite3*,void(*)(void*,const char*,sqlite_uint64),void*);
void (*progress_handler)(sqlite3*,int,int(*)(void*),void*);
void *(*realloc)(void*,int);
int (*reset)(sqlite3_stmt*pStmt);
void (*result_blob)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_double)(sqlite3_context*,double);
void (*result_error)(sqlite3_context*,const char*,int);
void (*result_error16)(sqlite3_context*,const void*,int);
void (*result_int)(sqlite3_context*,int);
void (*result_int64)(sqlite3_context*,sqlite_int64);
void (*result_null)(sqlite3_context*);
void (*result_text)(sqlite3_context*,const char*,int,void(*)(void*));
void (*result_text16)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_text16be)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_text16le)(sqlite3_context*,const void*,int,void(*)(void*));
void (*result_value)(sqlite3_context*,sqlite3_value*);
void * (*rollback_hook)(sqlite3*,void(*)(void*),void*);
int (*set_authorizer)(sqlite3*,int(*)(void*,int,const char*,const char*,
const char*,const char*),void*);
void (*set_auxdata)(sqlite3_context*,int,void*,void (*)(void*));
char * (*xsnprintf)(int,char*,const char*,...);
int (*step)(sqlite3_stmt*);
int (*table_column_metadata)(sqlite3*,const char*,const char*,const char*,
char const**,char const**,int*,int*,int*);
void (*thread_cleanup)(void);
int (*total_changes)(sqlite3*);
void * (*trace)(sqlite3*,void(*xTrace)(void*,const char*),void*);
int (*transfer_bindings)(sqlite3_stmt*,sqlite3_stmt*);
void * (*update_hook)(sqlite3*,void(*)(void*,int ,char const*,char const*,
sqlite_int64),void*);
void * (*user_data)(sqlite3_context*);
const void * (*value_blob)(sqlite3_value*);
int (*value_bytes)(sqlite3_value*);
int (*value_bytes16)(sqlite3_value*);
double (*value_double)(sqlite3_value*);
int (*value_int)(sqlite3_value*);
sqlite_int64 (*value_int64)(sqlite3_value*);
int (*value_numeric_type)(sqlite3_value*);
const unsigned char * (*value_text)(sqlite3_value*);
const void * (*value_text16)(sqlite3_value*);
const void * (*value_text16be)(sqlite3_value*);
const void * (*value_text16le)(sqlite3_value*);
int (*value_type)(sqlite3_value*);
char *(*vmprintf)(const char*,va_list);
/* Added ??? */
int (*overload_function)(sqlite3*, const char *zFuncName, int nArg);
/* Added by 3.3.13 */
int (*prepare_v2)(sqlite3*,const char*,int,sqlite3_stmt**,const char**);
int (*prepare16_v2)(sqlite3*,const void*,int,sqlite3_stmt**,const void**);
int (*clear_bindings)(sqlite3_stmt*);
/* Added by 3.4.1 */
int (*create_module_v2)(sqlite3*,const char*,const sqlite3_module*,void*,
void (*xDestroy)(void *));
/* Added by 3.5.0 */
int (*bind_zeroblob)(sqlite3_stmt*,int,int);
int (*blob_bytes)(sqlite3_blob*);
int (*blob_close)(sqlite3_blob*);
int (*blob_open)(sqlite3*,const char*,const char*,const char*,sqlite3_int64,
int,sqlite3_blob**);
int (*blob_read)(sqlite3_blob*,void*,int,int);
int (*blob_write)(sqlite3_blob*,const void*,int,int);
int (*create_collation_v2)(sqlite3*,const char*,int,void*,
int(*)(void*,int,const void*,int,const void*),
void(*)(void*));
int (*file_control)(sqlite3*,const char*,int,void*);
sqlite3_int64 (*memory_highwater)(int);
sqlite3_int64 (*memory_used)(void);
sqlite3_mutex *(*mutex_alloc)(int);
void (*mutex_enter)(sqlite3_mutex*);
void (*mutex_free)(sqlite3_mutex*);
void (*mutex_leave)(sqlite3_mutex*);
int (*mutex_try)(sqlite3_mutex*);
int (*open_v2)(const char*,sqlite3**,int,const char*);
int (*release_memory)(int);
void (*result_error_nomem)(sqlite3_context*);
void (*result_error_toobig)(sqlite3_context*);
int (*sleep)(int);
void (*soft_heap_limit)(int);
sqlite3_vfs *(*vfs_find)(const char*);
int (*vfs_register)(sqlite3_vfs*,int);
int (*vfs_unregister)(sqlite3_vfs*);
int (*xthreadsafe)(void);
void (*result_zeroblob)(sqlite3_context*,int);
void (*result_error_code)(sqlite3_context*,int);
int (*test_control)(int, ...);
void (*randomness)(int,void*);
sqlite3 *(*context_db_handle)(sqlite3_context*);
int (*extended_result_codes)(sqlite3*,int);
int (*limit)(sqlite3*,int,int);
sqlite3_stmt *(*next_stmt)(sqlite3*,sqlite3_stmt*);
const char *(*sql)(sqlite3_stmt*);
int (*status)(int,int*,int*,int);
int (*backup_finish)(sqlite3_backup*);
sqlite3_backup *(*backup_init)(sqlite3*,const char*,sqlite3*,const char*);
int (*backup_pagecount)(sqlite3_backup*);
int (*backup_remaining)(sqlite3_backup*);
int (*backup_step)(sqlite3_backup*,int);
const char *(*compileoption_get)(int);
int (*compileoption_used)(const char*);
int (*create_function_v2)(sqlite3*,const char*,int,int,void*,
void (*xFunc)(sqlite3_context*,int,sqlite3_value**),
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*),
void(*xDestroy)(void*));
int (*db_config)(sqlite3*,int,...);
sqlite3_mutex *(*db_mutex)(sqlite3*);
int (*db_status)(sqlite3*,int,int*,int*,int);
int (*extended_errcode)(sqlite3*);
void (*log)(int,const char*,...);
sqlite3_int64 (*soft_heap_limit64)(sqlite3_int64);
const char *(*sourceid)(void);
int (*stmt_status)(sqlite3_stmt*,int,int);
int (*strnicmp)(const char*,const char*,int);
int (*unlock_notify)(sqlite3*,void(*)(void**,int),void*);
int (*wal_autocheckpoint)(sqlite3*,int);
int (*wal_checkpoint)(sqlite3*,const char*);
void *(*wal_hook)(sqlite3*,int(*)(void*,sqlite3*,const char*,int),void*);
int (*blob_reopen)(sqlite3_blob*,sqlite3_int64);
int (*vtab_config)(sqlite3*,int op,...);
int (*vtab_on_conflict)(sqlite3*);
/* Version 3.7.16 and later */
int (*close_v2)(sqlite3*);
const char *(*db_filename)(sqlite3*,const char*);
int (*db_readonly)(sqlite3*,const char*);
int (*db_release_memory)(sqlite3*);
const char *(*errstr)(int);
int (*stmt_busy)(sqlite3_stmt*);
int (*stmt_readonly)(sqlite3_stmt*);
int (*stricmp)(const char*,const char*);
int (*uri_boolean)(const char*,const char*,int);
sqlite3_int64 (*uri_int64)(const char*,const char*,sqlite3_int64);
const char *(*uri_parameter)(const char*,const char*);
char *(*xvsnprintf)(int,char*,const char*,va_list);
int (*wal_checkpoint_v2)(sqlite3*,const char*,int,int*,int*);
/* Version 3.8.7 and later */
int (*auto_extension)(void(*)(void));
int (*bind_blob64)(sqlite3_stmt*,int,const void*,sqlite3_uint64,
void(*)(void*));
int (*bind_text64)(sqlite3_stmt*,int,const char*,sqlite3_uint64,
void(*)(void*),unsigned char);
int (*cancel_auto_extension)(void(*)(void));
int (*load_extension)(sqlite3*,const char*,const char*,char**);
void *(*malloc64)(sqlite3_uint64);
sqlite3_uint64 (*msize)(void*);
void *(*realloc64)(void*,sqlite3_uint64);
void (*reset_auto_extension)(void);
void (*result_blob64)(sqlite3_context*,const void*,sqlite3_uint64,
void(*)(void*));
void (*result_text64)(sqlite3_context*,const char*,sqlite3_uint64,
void(*)(void*), unsigned char);
int (*strglob)(const char*,const char*);
/* Version 3.8.11 and later */
sqlite3_value *(*value_dup)(const sqlite3_value*);
void (*value_free)(sqlite3_value*);
int (*result_zeroblob64)(sqlite3_context*,sqlite3_uint64);
int (*bind_zeroblob64)(sqlite3_stmt*, int, sqlite3_uint64);
/* Version 3.9.0 and later */
unsigned int (*value_subtype)(sqlite3_value*);
void (*result_subtype)(sqlite3_context*,unsigned int);
/* Version 3.10.0 and later */
int (*status64)(int,sqlite3_int64*,sqlite3_int64*,int);
int (*strlike)(const char*,const char*,unsigned int);
int (*db_cacheflush)(sqlite3*);
/* Version 3.12.0 and later */
int (*system_errno)(sqlite3*);
/* Version 3.14.0 and later */
int (*trace_v2)(sqlite3*,unsigned,int(*)(unsigned,void*,void*,void*),void*);
char *(*expanded_sql)(sqlite3_stmt*);
/* Version 3.18.0 and later */
void (*set_last_insert_rowid)(sqlite3*,sqlite3_int64);
/* Version 3.20.0 and later */
int (*prepare_v3)(sqlite3*,const char*,int,unsigned int,
sqlite3_stmt**,const char**);
int (*prepare16_v3)(sqlite3*,const void*,int,unsigned int,
sqlite3_stmt**,const void**);
int (*bind_pointer)(sqlite3_stmt*,int,void*,const char*,void(*)(void*));
void (*result_pointer)(sqlite3_context*,void*,const char*,void(*)(void*));
void *(*value_pointer)(sqlite3_value*,const char*);
int (*vtab_nochange)(sqlite3_context*);
int (*value_nochange)(sqlite3_value*);
const char *(*vtab_collation)(sqlite3_index_info*,int);
/* Version 3.24.0 and later */
int (*keyword_count)(void);
int (*keyword_name)(int,const char**,int*);
int (*keyword_check)(const char*,int);
sqlite3_str *(*str_new)(sqlite3*);
char *(*str_finish)(sqlite3_str*);
void (*str_appendf)(sqlite3_str*, const char *zFormat, ...);
void (*str_vappendf)(sqlite3_str*, const char *zFormat, va_list);
void (*str_append)(sqlite3_str*, const char *zIn, int N);
void (*str_appendall)(sqlite3_str*, const char *zIn);
void (*str_appendchar)(sqlite3_str*, int N, char C);
void (*str_reset)(sqlite3_str*);
int (*str_errcode)(sqlite3_str*);
int (*str_length)(sqlite3_str*);
char *(*str_value)(sqlite3_str*);
/* Version 3.25.0 and later */
int (*create_window_function)(sqlite3*,const char*,int,int,void*,
void (*xStep)(sqlite3_context*,int,sqlite3_value**),
void (*xFinal)(sqlite3_context*),
void (*xValue)(sqlite3_context*),
void (*xInv)(sqlite3_context*,int,sqlite3_value**),
void(*xDestroy)(void*));
/* Version 3.26.0 and later */
const char *(*normalized_sql)(sqlite3_stmt*);
/* Version 3.28.0 and later */
int (*stmt_isexplain)(sqlite3_stmt*);
int (*value_frombind)(sqlite3_value*);
/* Version 3.30.0 and later */
int (*drop_modules)(sqlite3*,const char**);
/* Version 3.31.0 and later */
sqlite3_int64 (*hard_heap_limit64)(sqlite3_int64);
const char *(*uri_key)(const char*,int);
const char *(*filename_database)(const char*);
const char *(*filename_journal)(const char*);
const char *(*filename_wal)(const char*);
/* Version 3.32.0 and later */
char *(*create_filename)(const char*,const char*,const char*,
int,const char**);
void (*free_filename)(char*);
sqlite3_file *(*database_file_object)(const char*);
};
/*
** This is the function signature used for all extension entry points. It
** is also defined in the file "loadext.c".
*/
typedef int (*sqlite3_loadext_entry)(
sqlite3 *db, /* Handle to the database. */
char **pzErrMsg, /* Used to set error string on failure. */
const sqlite3_api_routines *pThunk /* Extension API function pointers. */
);
/*
** The following macros redefine the API routines so that they are
** redirected through the global sqlite3_api structure.
**
** This header file is also used by the loadext.c source file
** (part of the main SQLite library - not an extension) so that
** it can get access to the sqlite3_api_routines structure
** definition. But the main library does not want to redefine
** the API. So the redefinition macros are only valid if the
** SQLITE_CORE macros is undefined.
*/
#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION)
#define sqlite3_aggregate_context sqlite3_api->aggregate_context
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_aggregate_count sqlite3_api->aggregate_count
#endif
#define sqlite3_bind_blob sqlite3_api->bind_blob
#define sqlite3_bind_double sqlite3_api->bind_double
#define sqlite3_bind_int sqlite3_api->bind_int
#define sqlite3_bind_int64 sqlite3_api->bind_int64
#define sqlite3_bind_null sqlite3_api->bind_null
#define sqlite3_bind_parameter_count sqlite3_api->bind_parameter_count
#define sqlite3_bind_parameter_index sqlite3_api->bind_parameter_index
#define sqlite3_bind_parameter_name sqlite3_api->bind_parameter_name
#define sqlite3_bind_text sqlite3_api->bind_text
#define sqlite3_bind_text16 sqlite3_api->bind_text16
#define sqlite3_bind_value sqlite3_api->bind_value
#define sqlite3_busy_handler sqlite3_api->busy_handler
#define sqlite3_busy_timeout sqlite3_api->busy_timeout
#define sqlite3_changes sqlite3_api->changes
#define sqlite3_close sqlite3_api->close
#define sqlite3_collation_needed sqlite3_api->collation_needed
#define sqlite3_collation_needed16 sqlite3_api->collation_needed16
#define sqlite3_column_blob sqlite3_api->column_blob
#define sqlite3_column_bytes sqlite3_api->column_bytes
#define sqlite3_column_bytes16 sqlite3_api->column_bytes16
#define sqlite3_column_count sqlite3_api->column_count
#define sqlite3_column_database_name sqlite3_api->column_database_name
#define sqlite3_column_database_name16 sqlite3_api->column_database_name16
#define sqlite3_column_decltype sqlite3_api->column_decltype
#define sqlite3_column_decltype16 sqlite3_api->column_decltype16
#define sqlite3_column_double sqlite3_api->column_double
#define sqlite3_column_int sqlite3_api->column_int
#define sqlite3_column_int64 sqlite3_api->column_int64
#define sqlite3_column_name sqlite3_api->column_name
#define sqlite3_column_name16 sqlite3_api->column_name16
#define sqlite3_column_origin_name sqlite3_api->column_origin_name
#define sqlite3_column_origin_name16 sqlite3_api->column_origin_name16
#define sqlite3_column_table_name sqlite3_api->column_table_name
#define sqlite3_column_table_name16 sqlite3_api->column_table_name16
#define sqlite3_column_text sqlite3_api->column_text
#define sqlite3_column_text16 sqlite3_api->column_text16
#define sqlite3_column_type sqlite3_api->column_type
#define sqlite3_column_value sqlite3_api->column_value
#define sqlite3_commit_hook sqlite3_api->commit_hook
#define sqlite3_complete sqlite3_api->complete
#define sqlite3_complete16 sqlite3_api->complete16
#define sqlite3_create_collation sqlite3_api->create_collation
#define sqlite3_create_collation16 sqlite3_api->create_collation16
#define sqlite3_create_function sqlite3_api->create_function
#define sqlite3_create_function16 sqlite3_api->create_function16
#define sqlite3_create_module sqlite3_api->create_module
#define sqlite3_create_module_v2 sqlite3_api->create_module_v2
#define sqlite3_data_count sqlite3_api->data_count
#define sqlite3_db_handle sqlite3_api->db_handle
#define sqlite3_declare_vtab sqlite3_api->declare_vtab
#define sqlite3_enable_shared_cache sqlite3_api->enable_shared_cache
#define sqlite3_errcode sqlite3_api->errcode
#define sqlite3_errmsg sqlite3_api->errmsg
#define sqlite3_errmsg16 sqlite3_api->errmsg16
#define sqlite3_exec sqlite3_api->exec
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_expired sqlite3_api->expired
#endif
#define sqlite3_finalize sqlite3_api->finalize
#define sqlite3_free sqlite3_api->free
#define sqlite3_free_table sqlite3_api->free_table
#define sqlite3_get_autocommit sqlite3_api->get_autocommit
#define sqlite3_get_auxdata sqlite3_api->get_auxdata
#define sqlite3_get_table sqlite3_api->get_table
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_global_recover sqlite3_api->global_recover
#endif
#define sqlite3_interrupt sqlite3_api->interruptx
#define sqlite3_last_insert_rowid sqlite3_api->last_insert_rowid
#define sqlite3_libversion sqlite3_api->libversion
#define sqlite3_libversion_number sqlite3_api->libversion_number
#define sqlite3_malloc sqlite3_api->malloc
#define sqlite3_mprintf sqlite3_api->mprintf
#define sqlite3_open sqlite3_api->open
#define sqlite3_open16 sqlite3_api->open16
#define sqlite3_prepare sqlite3_api->prepare
#define sqlite3_prepare16 sqlite3_api->prepare16
#define sqlite3_prepare_v2 sqlite3_api->prepare_v2
#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2
#define sqlite3_profile sqlite3_api->profile
#define sqlite3_progress_handler sqlite3_api->progress_handler
#define sqlite3_realloc sqlite3_api->realloc
#define sqlite3_reset sqlite3_api->reset
#define sqlite3_result_blob sqlite3_api->result_blob
#define sqlite3_result_double sqlite3_api->result_double
#define sqlite3_result_error sqlite3_api->result_error
#define sqlite3_result_error16 sqlite3_api->result_error16
#define sqlite3_result_int sqlite3_api->result_int
#define sqlite3_result_int64 sqlite3_api->result_int64
#define sqlite3_result_null sqlite3_api->result_null
#define sqlite3_result_text sqlite3_api->result_text
#define sqlite3_result_text16 sqlite3_api->result_text16
#define sqlite3_result_text16be sqlite3_api->result_text16be
#define sqlite3_result_text16le sqlite3_api->result_text16le
#define sqlite3_result_value sqlite3_api->result_value
#define sqlite3_rollback_hook sqlite3_api->rollback_hook
#define sqlite3_set_authorizer sqlite3_api->set_authorizer
#define sqlite3_set_auxdata sqlite3_api->set_auxdata
#define sqlite3_snprintf sqlite3_api->xsnprintf
#define sqlite3_step sqlite3_api->step
#define sqlite3_table_column_metadata sqlite3_api->table_column_metadata
#define sqlite3_thread_cleanup sqlite3_api->thread_cleanup
#define sqlite3_total_changes sqlite3_api->total_changes
#define sqlite3_trace sqlite3_api->trace
#ifndef SQLITE_OMIT_DEPRECATED
#define sqlite3_transfer_bindings sqlite3_api->transfer_bindings
#endif
#define sqlite3_update_hook sqlite3_api->update_hook
#define sqlite3_user_data sqlite3_api->user_data
#define sqlite3_value_blob sqlite3_api->value_blob
#define sqlite3_value_bytes sqlite3_api->value_bytes
#define sqlite3_value_bytes16 sqlite3_api->value_bytes16
#define sqlite3_value_double sqlite3_api->value_double
#define sqlite3_value_int sqlite3_api->value_int
#define sqlite3_value_int64 sqlite3_api->value_int64
#define sqlite3_value_numeric_type sqlite3_api->value_numeric_type
#define sqlite3_value_text sqlite3_api->value_text
#define sqlite3_value_text16 sqlite3_api->value_text16
#define sqlite3_value_text16be sqlite3_api->value_text16be
#define sqlite3_value_text16le sqlite3_api->value_text16le
#define sqlite3_value_type sqlite3_api->value_type
#define sqlite3_vmprintf sqlite3_api->vmprintf
#define sqlite3_vsnprintf sqlite3_api->xvsnprintf
#define sqlite3_overload_function sqlite3_api->overload_function
#define sqlite3_prepare_v2 sqlite3_api->prepare_v2
#define sqlite3_prepare16_v2 sqlite3_api->prepare16_v2
#define sqlite3_clear_bindings sqlite3_api->clear_bindings
#define sqlite3_bind_zeroblob sqlite3_api->bind_zeroblob
#define sqlite3_blob_bytes sqlite3_api->blob_bytes
#define sqlite3_blob_close sqlite3_api->blob_close
#define sqlite3_blob_open sqlite3_api->blob_open
#define sqlite3_blob_read sqlite3_api->blob_read
#define sqlite3_blob_write sqlite3_api->blob_write
#define sqlite3_create_collation_v2 sqlite3_api->create_collation_v2
#define sqlite3_file_control sqlite3_api->file_control
#define sqlite3_memory_highwater sqlite3_api->memory_highwater
#define sqlite3_memory_used sqlite3_api->memory_used
#define sqlite3_mutex_alloc sqlite3_api->mutex_alloc
#define sqlite3_mutex_enter sqlite3_api->mutex_enter
#define sqlite3_mutex_free sqlite3_api->mutex_free
#define sqlite3_mutex_leave sqlite3_api->mutex_leave
#define sqlite3_mutex_try sqlite3_api->mutex_try
#define sqlite3_open_v2 sqlite3_api->open_v2
#define sqlite3_release_memory sqlite3_api->release_memory
#define sqlite3_result_error_nomem sqlite3_api->result_error_nomem
#define sqlite3_result_error_toobig sqlite3_api->result_error_toobig
#define sqlite3_sleep sqlite3_api->sleep
#define sqlite3_soft_heap_limit sqlite3_api->soft_heap_limit
#define sqlite3_vfs_find sqlite3_api->vfs_find
#define sqlite3_vfs_register sqlite3_api->vfs_register
#define sqlite3_vfs_unregister sqlite3_api->vfs_unregister
#define sqlite3_threadsafe sqlite3_api->xthreadsafe
#define sqlite3_result_zeroblob sqlite3_api->result_zeroblob
#define sqlite3_result_error_code sqlite3_api->result_error_code
#define sqlite3_test_control sqlite3_api->test_control
#define sqlite3_randomness sqlite3_api->randomness
#define sqlite3_context_db_handle sqlite3_api->context_db_handle
#define sqlite3_extended_result_codes sqlite3_api->extended_result_codes
#define sqlite3_limit sqlite3_api->limit
#define sqlite3_next_stmt sqlite3_api->next_stmt
#define sqlite3_sql sqlite3_api->sql
#define sqlite3_status sqlite3_api->status
#define sqlite3_backup_finish sqlite3_api->backup_finish
#define sqlite3_backup_init sqlite3_api->backup_init
#define sqlite3_backup_pagecount sqlite3_api->backup_pagecount
#define sqlite3_backup_remaining sqlite3_api->backup_remaining
#define sqlite3_backup_step sqlite3_api->backup_step
#define sqlite3_compileoption_get sqlite3_api->compileoption_get
#define sqlite3_compileoption_used sqlite3_api->compileoption_used
#define sqlite3_create_function_v2 sqlite3_api->create_function_v2
#define sqlite3_db_config sqlite3_api->db_config
#define sqlite3_db_mutex sqlite3_api->db_mutex
#define sqlite3_db_status sqlite3_api->db_status
#define sqlite3_extended_errcode sqlite3_api->extended_errcode
#define sqlite3_log sqlite3_api->log
#define sqlite3_soft_heap_limit64 sqlite3_api->soft_heap_limit64
#define sqlite3_sourceid sqlite3_api->sourceid
#define sqlite3_stmt_status sqlite3_api->stmt_status
#define sqlite3_strnicmp sqlite3_api->strnicmp
#define sqlite3_unlock_notify sqlite3_api->unlock_notify
#define sqlite3_wal_autocheckpoint sqlite3_api->wal_autocheckpoint
#define sqlite3_wal_checkpoint sqlite3_api->wal_checkpoint
#define sqlite3_wal_hook sqlite3_api->wal_hook
#define sqlite3_blob_reopen sqlite3_api->blob_reopen
#define sqlite3_vtab_config sqlite3_api->vtab_config
#define sqlite3_vtab_on_conflict sqlite3_api->vtab_on_conflict
/* Version 3.7.16 and later */
#define sqlite3_close_v2 sqlite3_api->close_v2
#define sqlite3_db_filename sqlite3_api->db_filename
#define sqlite3_db_readonly sqlite3_api->db_readonly
#define sqlite3_db_release_memory sqlite3_api->db_release_memory
#define sqlite3_errstr sqlite3_api->errstr
#define sqlite3_stmt_busy sqlite3_api->stmt_busy
#define sqlite3_stmt_readonly sqlite3_api->stmt_readonly
#define sqlite3_stricmp sqlite3_api->stricmp
#define sqlite3_uri_boolean sqlite3_api->uri_boolean
#define sqlite3_uri_int64 sqlite3_api->uri_int64
#define sqlite3_uri_parameter sqlite3_api->uri_parameter
#define sqlite3_uri_vsnprintf sqlite3_api->xvsnprintf
#define sqlite3_wal_checkpoint_v2 sqlite3_api->wal_checkpoint_v2
/* Version 3.8.7 and later */
#define sqlite3_auto_extension sqlite3_api->auto_extension
#define sqlite3_bind_blob64 sqlite3_api->bind_blob64
#define sqlite3_bind_text64 sqlite3_api->bind_text64
#define sqlite3_cancel_auto_extension sqlite3_api->cancel_auto_extension
#define sqlite3_load_extension sqlite3_api->load_extension
#define sqlite3_malloc64 sqlite3_api->malloc64
#define sqlite3_msize sqlite3_api->msize
#define sqlite3_realloc64 sqlite3_api->realloc64
#define sqlite3_reset_auto_extension sqlite3_api->reset_auto_extension
#define sqlite3_result_blob64 sqlite3_api->result_blob64
#define sqlite3_result_text64 sqlite3_api->result_text64
#define sqlite3_strglob sqlite3_api->strglob
/* Version 3.8.11 and later */
#define sqlite3_value_dup sqlite3_api->value_dup
#define sqlite3_value_free sqlite3_api->value_free
#define sqlite3_result_zeroblob64 sqlite3_api->result_zeroblob64
#define sqlite3_bind_zeroblob64 sqlite3_api->bind_zeroblob64
/* Version 3.9.0 and later */
#define sqlite3_value_subtype sqlite3_api->value_subtype
#define sqlite3_result_subtype sqlite3_api->result_subtype
/* Version 3.10.0 and later */
#define sqlite3_status64 sqlite3_api->status64
#define sqlite3_strlike sqlite3_api->strlike
#define sqlite3_db_cacheflush sqlite3_api->db_cacheflush
/* Version 3.12.0 and later */
#define sqlite3_system_errno sqlite3_api->system_errno
/* Version 3.14.0 and later */
#define sqlite3_trace_v2 sqlite3_api->trace_v2
#define sqlite3_expanded_sql sqlite3_api->expanded_sql
/* Version 3.18.0 and later */
#define sqlite3_set_last_insert_rowid sqlite3_api->set_last_insert_rowid
/* Version 3.20.0 and later */
#define sqlite3_prepare_v3 sqlite3_api->prepare_v3
#define sqlite3_prepare16_v3 sqlite3_api->prepare16_v3
#define sqlite3_bind_pointer sqlite3_api->bind_pointer
#define sqlite3_result_pointer sqlite3_api->result_pointer
#define sqlite3_value_pointer sqlite3_api->value_pointer
/* Version 3.22.0 and later */
#define sqlite3_vtab_nochange sqlite3_api->vtab_nochange
#define sqlite3_value_nochange sqlite3_api->value_nochange
#define sqlite3_vtab_collation sqlite3_api->vtab_collation
/* Version 3.24.0 and later */
#define sqlite3_keyword_count sqlite3_api->keyword_count
#define sqlite3_keyword_name sqlite3_api->keyword_name
#define sqlite3_keyword_check sqlite3_api->keyword_check
#define sqlite3_str_new sqlite3_api->str_new
#define sqlite3_str_finish sqlite3_api->str_finish
#define sqlite3_str_appendf sqlite3_api->str_appendf
#define sqlite3_str_vappendf sqlite3_api->str_vappendf
#define sqlite3_str_append sqlite3_api->str_append
#define sqlite3_str_appendall sqlite3_api->str_appendall
#define sqlite3_str_appendchar sqlite3_api->str_appendchar
#define sqlite3_str_reset sqlite3_api->str_reset
#define sqlite3_str_errcode sqlite3_api->str_errcode
#define sqlite3_str_length sqlite3_api->str_length
#define sqlite3_str_value sqlite3_api->str_value
/* Version 3.25.0 and later */
#define sqlite3_create_window_function sqlite3_api->create_window_function
/* Version 3.26.0 and later */
#define sqlite3_normalized_sql sqlite3_api->normalized_sql
/* Version 3.28.0 and later */
#define sqlite3_stmt_isexplain sqlite3_api->stmt_isexplain
#define sqlite3_value_frombind sqlite3_api->value_frombind
/* Version 3.30.0 and later */
#define sqlite3_drop_modules sqlite3_api->drop_modules
/* Version 3.31.0 and later */
#define sqlite3_hard_heap_limit64 sqlite3_api->hard_heap_limit64
#define sqlite3_uri_key sqlite3_api->uri_key
#define sqlite3_filename_database sqlite3_api->filename_database
#define sqlite3_filename_journal sqlite3_api->filename_journal
#define sqlite3_filename_wal sqlite3_api->filename_wal
/* Version 3.32.0 and later */
#define sqlite3_create_filename sqlite3_api->create_filename
#define sqlite3_free_filename sqlite3_api->free_filename
#define sqlite3_database_file_object sqlite3_api->database_file_object
#endif /* !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION) */
#if !defined(SQLITE_CORE) && !defined(SQLITE_OMIT_LOAD_EXTENSION)
/* This case when the file really is being compiled as a loadable
** extension */
# define SQLITE_EXTENSION_INIT1 const sqlite3_api_routines *sqlite3_api=0;
# define SQLITE_EXTENSION_INIT2(v) sqlite3_api=v;
# define SQLITE_EXTENSION_INIT3 \
extern const sqlite3_api_routines *sqlite3_api;
#else
/* This case when the file is being statically linked into the
** application */
# define SQLITE_EXTENSION_INIT1 /*no-op*/
# define SQLITE_EXTENSION_INIT2(v) (void)v; /* unused parameter */
# define SQLITE_EXTENSION_INIT3 /*no-op*/
#endif
#endif /* SQLITE3EXT_H */
#else // USE_LIBSQLITE3
// If users really want to link against the system sqlite3 we
// need to make this file a noop.
#endif

37
vendor/github.com/mattn/go-sqlite3/static_mock.go generated vendored Normal file
View file

@@ -0,0 +1,37 @@
// Copyright (C) 2019 Yasuhiro Matsumoto <mattn.jp@gmail.com>.
//
// Use of this source code is governed by an MIT-style
// license that can be found in the LICENSE file.
// +build !cgo

package sqlite3
import (
"database/sql"
"database/sql/driver"
"errors"
)
var errorMsg = errors.New("Binary was compiled with 'CGO_ENABLED=0', go-sqlite3 requires cgo to work. This is a stub")
func init() {
sql.Register("sqlite3", &SQLiteDriver{})
}
type (
SQLiteDriver struct {
Extensions []string
ConnectHook func(*SQLiteConn) error
}
SQLiteConn struct{}
)
func (SQLiteDriver) Open(s string) (driver.Conn, error) { return nil, errorMsg }
func (c *SQLiteConn) RegisterAggregator(string, interface{}, bool) error { return errorMsg }
func (c *SQLiteConn) RegisterAuthorizer(func(int, string, string, string) int) {}
func (c *SQLiteConn) RegisterCollation(string, func(string, string) int) error { return errorMsg }
func (c *SQLiteConn) RegisterCommitHook(func() int) {}
func (c *SQLiteConn) RegisterFunc(string, interface{}, bool) error { return errorMsg }
func (c *SQLiteConn) RegisterRollbackHook(func()) {}
func (c *SQLiteConn) RegisterUpdateHook(func(int, string, string, int64)) {}
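The stub above exists so that builds with `CGO_ENABLED=0` still compile and register the `sqlite3` driver name; any attempt to actually use the driver fails with `errorMsg`. A minimal sketch of how that surfaces at runtime (the DSN is hypothetical):

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // resolves to static_mock.go when CGO_ENABLED=0
)

func main() {
	// sql.Open only validates the driver name; no connection is made yet.
	db, err := sql.Open("sqlite3", "file:demo.db") // hypothetical DSN
	if err != nil {
		panic(err)
	}
	// Ping forces driver.Open, which the stub answers with errorMsg.
	if err := db.Ping(); err != nil {
		fmt.Println("expected failure without cgo:", err)
	}
}
```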

21
vendor/gorm.io/driver/sqlite/License generated vendored Normal file
View file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013-NOW Jinzhu <wosmvp@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

17
vendor/gorm.io/driver/sqlite/README.md generated vendored Normal file
View file

@@ -0,0 +1,17 @@
# GORM Sqlite Driver
![CI](https://github.com/go-gorm/sqlite/workflows/CI/badge.svg)
## USAGE
```go
import (
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
// github.com/mattn/go-sqlite3
db, err := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{})
```
Checkout [https://gorm.io](https://gorm.io) for details.
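Beyond the default shown above, the driver's `Dialector` also exposes `DSN`, `DriverName` and `Conn` fields (see `sqlite.go` later in this commit). A minimal sketch of two common variations, assuming the in-memory DSN syntax supported by go-sqlite3:

```go
import (
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// Shared in-memory database; the DSN syntax is go-sqlite3's.
db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared"), &gorm.Config{})

// The Dialector can also be constructed directly, e.g. to override DriverName.
db, err = gorm.Open(&sqlite.Dialector{DriverName: "sqlite3", DSN: "gorm.db"}, &gorm.Config{})
```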

7
vendor/gorm.io/driver/sqlite/errors.go generated vendored Normal file
View file

@@ -0,0 +1,7 @@
package sqlite
import "errors"
var (
ErrConstraintsNotImplemented = errors.New("constraints not implemented on sqlite, consider using DisableForeignKeyConstraintWhenMigrating, more details https://github.com/go-gorm/gorm/wiki/GORM-V2-Release-Note-Draft#all-new-migrator")
)
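`ErrConstraintsNotImplemented` is returned by the migrator's `CreateConstraint` and `DropConstraint` (see `migrator.go` below); the workaround its message points at is a `gorm.Config` flag. A minimal sketch of that configuration (the file name is illustrative):

```go
db, err := gorm.Open(sqlite.Open("gorm.db"), &gorm.Config{
	// Skip creating foreign-key constraints during AutoMigrate,
	// since this driver cannot add them after table creation.
	DisableForeignKeyConstraintWhenMigrating: true,
})
```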

8
vendor/gorm.io/driver/sqlite/go.mod generated vendored Normal file
View file

@@ -0,0 +1,8 @@
module gorm.io/driver/sqlite
go 1.14
require (
github.com/mattn/go-sqlite3 v1.14.5
gorm.io/gorm v1.20.7
)

8
vendor/gorm.io/driver/sqlite/go.sum generated vendored Normal file
View file

@@ -0,0 +1,8 @@
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.1 h1:g39TucaRWyV3dwDO++eEc6qf8TVIQ/Da48WmqjZ3i7E=
github.com/jinzhu/now v1.1.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/mattn/go-sqlite3 v1.14.5 h1:1IdxlwTNazvbKJQSxoJ5/9ECbEeaTTyeU7sEAZ5KKTQ=
github.com/mattn/go-sqlite3 v1.14.5/go.mod h1:WVKg1VTActs4Qso6iwGbiFih2UIHo0ENGwNd0Lj+XmI=
gorm.io/gorm v1.20.7 h1:rMS4CL3pNmYq1V5/X+nHHjh1Dx6dnf27+Cai5zabo+M=
gorm.io/gorm v1.20.7/go.mod h1:0HFTzE/SqkGTzK6TlDPPQbAYCluiVvhzoA1+aVyzenw=

285
vendor/gorm.io/driver/sqlite/migrator.go generated vendored Normal file
View file

@@ -0,0 +1,285 @@
package sqlite
import (
"fmt"
"regexp"
"strings"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/migrator"
"gorm.io/gorm/schema"
)
type Migrator struct {
migrator.Migrator
}
func (m *Migrator) RunWithoutForeignKey(fc func() error) error {
var enabled int
m.DB.Raw("PRAGMA foreign_keys").Scan(&enabled)
if enabled == 1 {
m.DB.Exec("PRAGMA foreign_keys = OFF")
defer m.DB.Exec("PRAGMA foreign_keys = ON")
}
return fc()
}
func (m Migrator) HasTable(value interface{}) bool {
var count int
m.Migrator.RunWithValue(value, func(stmt *gorm.Statement) error {
return m.DB.Raw("SELECT count(*) FROM sqlite_master WHERE type='table' AND name=?", stmt.Table).Row().Scan(&count)
})
return count > 0
}
func (m Migrator) DropTable(values ...interface{}) error {
return m.RunWithoutForeignKey(func() error {
values = m.ReorderModels(values, false)
tx := m.DB.Session(&gorm.Session{})
for i := len(values) - 1; i >= 0; i-- {
if err := m.RunWithValue(values[i], func(stmt *gorm.Statement) error {
return tx.Exec("DROP TABLE IF EXISTS ?", clause.Table{Name: stmt.Table}).Error
}); err != nil {
return err
}
}
return nil
})
}
func (m Migrator) HasColumn(value interface{}, name string) bool {
var count int
m.Migrator.RunWithValue(value, func(stmt *gorm.Statement) error {
if field := stmt.Schema.LookUpField(name); field != nil {
name = field.DBName
}
if name != "" {
m.DB.Raw(
"SELECT count(*) FROM sqlite_master WHERE type = ? AND tbl_name = ? AND (sql LIKE ? OR sql LIKE ? OR sql LIKE ?)",
"table", stmt.Table, `%"`+name+`" %`, `%`+name+` %`, "%`"+name+"`%",
).Row().Scan(&count)
}
return nil
})
return count > 0
}
func (m Migrator) AlterColumn(value interface{}, name string) error {
return m.RunWithoutForeignKey(func() error {
return m.RunWithValue(value, func(stmt *gorm.Statement) error {
if field := stmt.Schema.LookUpField(name); field != nil {
var (
createSQL string
newTableName = stmt.Table + "__temp"
)
m.DB.Raw("SELECT sql FROM sqlite_master WHERE type = ? AND tbl_name = ? AND name = ?", "table", stmt.Table, stmt.Table).Row().Scan(&createSQL)
if reg, err := regexp.Compile("(`|'|\"| )" + field.DBName + "(`|'|\"| ) .*?,"); err == nil {
tableReg, err := regexp.Compile(" ('|`|\"| )" + stmt.Table + "('|`|\"| ) ")
if err != nil {
return err
}
createSQL = tableReg.ReplaceAllString(createSQL, fmt.Sprintf(" `%v` ", newTableName))
createSQL = reg.ReplaceAllString(createSQL, fmt.Sprintf("`%v` ?,", field.DBName))
var columns []string
columnTypes, _ := m.DB.Migrator().ColumnTypes(value)
for _, columnType := range columnTypes {
columns = append(columns, fmt.Sprintf("`%v`", columnType.Name()))
}
return m.DB.Transaction(func(tx *gorm.DB) error {
queries := []string{
createSQL,
fmt.Sprintf("INSERT INTO `%v`(%v) SELECT %v FROM `%v`", newTableName, strings.Join(columns, ","), strings.Join(columns, ","), stmt.Table),
fmt.Sprintf("DROP TABLE `%v`", stmt.Table),
fmt.Sprintf("ALTER TABLE `%v` RENAME TO `%v`", newTableName, stmt.Table),
}
for _, query := range queries {
if err := tx.Exec(query, m.FullDataTypeOf(field)).Error; err != nil {
return err
}
}
return nil
})
} else {
return err
}
} else {
return fmt.Errorf("failed to alter field with name %v", name)
}
})
})
}
func (m Migrator) DropColumn(value interface{}, name string) error {
return m.RunWithValue(value, func(stmt *gorm.Statement) error {
if field := stmt.Schema.LookUpField(name); field != nil {
name = field.DBName
}
var (
createSQL string
newTableName = stmt.Table + "__temp"
)
m.DB.Raw("SELECT sql FROM sqlite_master WHERE type = ? AND tbl_name = ? AND name = ?", "table", stmt.Table, stmt.Table).Row().Scan(&createSQL)
if reg, err := regexp.Compile("(`|'|\"| )" + name + "(`|'|\"| ) .*?,"); err == nil {
tableReg, err := regexp.Compile(" ('|`|\"| )" + stmt.Table + "('|`|\"| ) ")
if err != nil {
return err
}
createSQL = tableReg.ReplaceAllString(createSQL, fmt.Sprintf(" `%v` ", newTableName))
createSQL = reg.ReplaceAllString(createSQL, "")
var columns []string
columnTypes, _ := m.DB.Migrator().ColumnTypes(value)
for _, columnType := range columnTypes {
if columnType.Name() != name {
columns = append(columns, fmt.Sprintf("`%v`", columnType.Name()))
}
}
return m.DB.Transaction(func(tx *gorm.DB) error {
queries := []string{
createSQL,
fmt.Sprintf("INSERT INTO `%v`(%v) SELECT %v FROM `%v`", newTableName, strings.Join(columns, ","), strings.Join(columns, ","), stmt.Table),
fmt.Sprintf("DROP TABLE `%v`", stmt.Table),
fmt.Sprintf("ALTER TABLE `%v` RENAME TO `%v`", newTableName, stmt.Table),
}
for _, query := range queries {
if err := tx.Exec(query).Error; err != nil {
return err
}
}
return nil
})
} else {
return err
}
})
}
func (m Migrator) CreateConstraint(interface{}, string) error {
return ErrConstraintsNotImplemented
}
func (m Migrator) DropConstraint(interface{}, string) error {
return ErrConstraintsNotImplemented
}
func (m Migrator) HasConstraint(value interface{}, name string) bool {
var count int64
m.RunWithValue(value, func(stmt *gorm.Statement) error {
m.DB.Raw(
"SELECT count(*) FROM sqlite_master WHERE type = ? AND tbl_name = ? AND (sql LIKE ? OR sql LIKE ? OR sql LIKE ?)",
"table", stmt.Table, `%CONSTRAINT "`+name+`" %`, `%CONSTRAINT `+name+` %`, "%CONSTRAINT `"+name+"`%",
).Row().Scan(&count)
return nil
})
return count > 0
}
func (m Migrator) CurrentDatabase() (name string) {
var null interface{}
m.DB.Raw("PRAGMA database_list").Row().Scan(&null, &name, &null)
return
}
func (m Migrator) BuildIndexOptions(opts []schema.IndexOption, stmt *gorm.Statement) (results []interface{}) {
for _, opt := range opts {
str := stmt.Quote(opt.DBName)
if opt.Expression != "" {
str = opt.Expression
}
if opt.Collate != "" {
str += " COLLATE " + opt.Collate
}
if opt.Sort != "" {
str += " " + opt.Sort
}
results = append(results, clause.Expr{SQL: str})
}
return
}
func (m Migrator) CreateIndex(value interface{}, name string) error {
return m.RunWithValue(value, func(stmt *gorm.Statement) error {
if idx := stmt.Schema.LookIndex(name); idx != nil {
opts := m.BuildIndexOptions(idx.Fields, stmt)
values := []interface{}{clause.Column{Name: idx.Name}, clause.Table{Name: stmt.Table}, opts}
createIndexSQL := "CREATE "
if idx.Class != "" {
createIndexSQL += idx.Class + " "
}
createIndexSQL += "INDEX ?"
if idx.Type != "" {
createIndexSQL += " USING " + idx.Type
}
createIndexSQL += " ON ??"
if idx.Where != "" {
createIndexSQL += " WHERE " + idx.Where
}
return m.DB.Exec(createIndexSQL, values...).Error
}
return fmt.Errorf("failed to create index with name %v", name)
})
}
func (m Migrator) HasIndex(value interface{}, name string) bool {
var count int
m.RunWithValue(value, func(stmt *gorm.Statement) error {
if idx := stmt.Schema.LookIndex(name); idx != nil {
name = idx.Name
}
if name != "" {
m.DB.Raw(
"SELECT count(*) FROM sqlite_master WHERE type = ? AND tbl_name = ? AND name = ?", "index", stmt.Table, name,
).Row().Scan(&count)
}
return nil
})
return count > 0
}
func (m Migrator) RenameIndex(value interface{}, oldName, newName string) error {
return m.RunWithValue(value, func(stmt *gorm.Statement) error {
var sql string
m.DB.Raw("SELECT sql FROM sqlite_master WHERE type = ? AND tbl_name = ? AND name = ?", "index", stmt.Table, oldName).Row().Scan(&sql)
if sql != "" {
return m.DB.Exec(strings.Replace(sql, oldName, newName, 1)).Error
}
return fmt.Errorf("failed to find index with name %v", oldName)
})
}
func (m Migrator) DropIndex(value interface{}, name string) error {
return m.RunWithValue(value, func(stmt *gorm.Statement) error {
if idx := stmt.Schema.LookIndex(name); idx != nil {
name = idx.Name
}
return m.DB.Exec("DROP INDEX ?", clause.Column{Name: name}).Error
})
}
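Because SQLite cannot alter or drop a column in place, `AlterColumn` and `DropColumn` above rebuild the table: they rewrite the original CREATE TABLE statement against a `__temp` table, copy the rows across, drop the old table, and rename the copy back. From application code this is normally reached through the generic migrator API; a minimal sketch, assuming `db` is a `*gorm.DB` opened with this driver and `User` is a hypothetical model:

```go
type User struct {
	ID   uint
	Name string
	Age  int
}

// AutoMigrate drives HasTable / HasColumn / AlterColumn internally.
if err := db.AutoMigrate(&User{}); err != nil {
	panic(err)
}

// Explicit calls go through the copy-and-rename rebuild shown above.
if err := db.Migrator().AlterColumn(&User{}, "Name"); err != nil {
	panic(err)
}
if err := db.Migrator().DropColumn(&User{}, "Age"); err != nil {
	panic(err)
}
```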

180
vendor/gorm.io/driver/sqlite/sqlite.go generated vendored Normal file
View file

@@ -0,0 +1,180 @@
package sqlite
import (
"database/sql"
"strconv"
"strings"
_ "github.com/mattn/go-sqlite3"
"gorm.io/gorm"
"gorm.io/gorm/callbacks"
"gorm.io/gorm/clause"
"gorm.io/gorm/logger"
"gorm.io/gorm/migrator"
"gorm.io/gorm/schema"
)
// DriverName is the default driver name for SQLite.
const DriverName = "sqlite3"
type Dialector struct {
DriverName string
DSN string
Conn gorm.ConnPool
}
func Open(dsn string) gorm.Dialector {
return &Dialector{DSN: dsn}
}
func (dialector Dialector) Name() string {
return "sqlite"
}
func (dialector Dialector) Initialize(db *gorm.DB) (err error) {
if dialector.DriverName == "" {
dialector.DriverName = DriverName
}
// register callbacks
callbacks.RegisterDefaultCallbacks(db, &callbacks.Config{
LastInsertIDReversed: true,
})
if dialector.Conn != nil {
db.ConnPool = dialector.Conn
} else {
db.ConnPool, err = sql.Open(dialector.DriverName, dialector.DSN)
if err != nil {
return err
}
}
for k, v := range dialector.ClauseBuilders() {
db.ClauseBuilders[k] = v
}
return
}
func (dialector Dialector) ClauseBuilders() map[string]clause.ClauseBuilder {
return map[string]clause.ClauseBuilder{
"INSERT": func(c clause.Clause, builder clause.Builder) {
if insert, ok := c.Expression.(clause.Insert); ok {
if stmt, ok := builder.(*gorm.Statement); ok {
stmt.WriteString("INSERT ")
if insert.Modifier != "" {
stmt.WriteString(insert.Modifier)
stmt.WriteByte(' ')
}
stmt.WriteString("INTO ")
if insert.Table.Name == "" {
stmt.WriteQuoted(stmt.Table)
} else {
stmt.WriteQuoted(insert.Table)
}
return
}
}
c.Build(builder)
},
"LIMIT": func(c clause.Clause, builder clause.Builder) {
if limit, ok := c.Expression.(clause.Limit); ok {
if limit.Limit > 0 {
builder.WriteString("LIMIT ")
builder.WriteString(strconv.Itoa(limit.Limit))
}
if limit.Offset > 0 {
if limit.Limit > 0 {
builder.WriteString(" ")
}
builder.WriteString("OFFSET ")
builder.WriteString(strconv.Itoa(limit.Offset))
}
}
},
"FOR": func(c clause.Clause, builder clause.Builder) {
if _, ok := c.Expression.(clause.Locking); ok {
// SQLite3 does not support row-level locking.
return
}
c.Build(builder)
},
}
}
func (dialector Dialector) DefaultValueOf(field *schema.Field) clause.Expression {
if field.AutoIncrement {
return clause.Expr{SQL: "NULL"}
}
// doesn't work, will raise error
return clause.Expr{SQL: "DEFAULT"}
}
func (dialector Dialector) Migrator(db *gorm.DB) gorm.Migrator {
return Migrator{migrator.Migrator{Config: migrator.Config{
DB: db,
Dialector: dialector,
CreateIndexAfterCreateTable: true,
}}}
}
func (dialector Dialector) BindVarTo(writer clause.Writer, stmt *gorm.Statement, v interface{}) {
writer.WriteByte('?')
}
func (dialector Dialector) QuoteTo(writer clause.Writer, str string) {
writer.WriteByte('`')
if strings.Contains(str, ".") {
for idx, str := range strings.Split(str, ".") {
if idx > 0 {
writer.WriteString(".`")
}
writer.WriteString(str)
writer.WriteByte('`')
}
} else {
writer.WriteString(str)
writer.WriteByte('`')
}
}
func (dialector Dialector) Explain(sql string, vars ...interface{}) string {
return logger.ExplainSQL(sql, nil, `"`, vars...)
}
func (dialector Dialector) DataTypeOf(field *schema.Field) string {
switch field.DataType {
case schema.Bool:
return "numeric"
case schema.Int, schema.Uint:
if field.AutoIncrement && !field.PrimaryKey {
// https://www.sqlite.org/autoinc.html
return "integer PRIMARY KEY AUTOINCREMENT"
} else {
return "integer"
}
case schema.Float:
return "real"
case schema.String:
return "text"
case schema.Time:
return "datetime"
case schema.Bytes:
return "blob"
}
return string(field.DataType)
}
func (dialector Dialector) SavePoint(tx *gorm.DB, name string) error {
tx.Exec("SAVEPOINT " + name)
return nil
}
func (dialector Dialector) RollbackTo(tx *gorm.DB, name string) error {
tx.Exec("ROLLBACK TO SAVEPOINT " + name)
return nil
}
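`SavePoint` and `RollbackTo` are what gorm's nested transactions rely on for SQLite; they can also be called directly. A minimal sketch, assuming `db` is a `*gorm.DB` opened with this dialector and `User` is a hypothetical model:

```go
tx := db.Begin()
tx.Create(&User{Name: "kept"})

tx.SavePoint("sp1")                 // issues: SAVEPOINT sp1
tx.Create(&User{Name: "discarded"})
tx.RollbackTo("sp1")                // issues: ROLLBACK TO SAVEPOINT sp1

tx.Commit()                         // only "kept" is persisted
```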

5
vendor/gorm.io/gorm/.gitignore generated vendored Normal file
View file

@@ -0,0 +1,5 @@
TODO*
documents
coverage.txt
_book
.idea

21
vendor/gorm.io/gorm/License generated vendored Normal file
View file

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2013-NOW Jinzhu <wosmvp@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

42
vendor/gorm.io/gorm/README.md generated vendored Normal file
View file

@@ -0,0 +1,42 @@
# GORM
The fantastic ORM library for Golang, aims to be developer friendly.
[![go report card](https://goreportcard.com/badge/github.com/go-gorm/gorm "go report card")](https://goreportcard.com/report/github.com/go-gorm/gorm)
[![test status](https://github.com/go-gorm/gorm/workflows/tests/badge.svg?branch=master "test status")](https://github.com/go-gorm/gorm/actions)
[![Join the chat at https://gitter.im/jinzhu/gorm](https://img.shields.io/gitter/room/jinzhu/gorm.svg)](https://gitter.im/jinzhu/gorm?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[![Open Collective Backer](https://opencollective.com/gorm/tiers/backer/badge.svg?label=backer&color=brightgreen "Open Collective Backer")](https://opencollective.com/gorm)
[![Open Collective Sponsor](https://opencollective.com/gorm/tiers/sponsor/badge.svg?label=sponsor&color=brightgreen "Open Collective Sponsor")](https://opencollective.com/gorm)
[![MIT license](https://img.shields.io/badge/license-MIT-brightgreen.svg)](https://opensource.org/licenses/MIT)
[![Go.Dev reference](https://img.shields.io/badge/go.dev-reference-blue?logo=go&logoColor=white)](https://pkg.go.dev/gorm.io/gorm?tab=doc)
## Overview
* Full-Featured ORM
* Associations (Has One, Has Many, Belongs To, Many To Many, Polymorphism, Single-table inheritance)
* Hooks (Before/After Create/Save/Update/Delete/Find)
* Eager loading with `Preload`, `Joins`
* Transactions, Nested Transactions, Save Point, RollbackTo to Saved Point
* Context, Prepared Statement Mode, DryRun Mode
* Batch Insert, FindInBatches, Find To Map
* SQL Builder, Upsert, Locking, Optimizer/Index/Comment Hints, NamedArg, Search/Update/Create with SQL Expr
* Composite Primary Key
* Auto Migrations
* Logger
* Extendable, flexible plugin API: Database Resolver (Multiple Databases, Read/Write Splitting) / Prometheus…
* Every feature comes with tests
* Developer Friendly
## Getting Started
* GORM Guides [https://gorm.io](https://gorm.io)
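The guides walk through everything in the feature list above; as a quick orientation, here is a minimal end-to-end sketch (model definition, migration, CRUD) using the sqlite driver vendored in this commit — the file name and model are illustrative:

```go
package main

import (
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

type Product struct {
	gorm.Model
	Code  string
	Price uint
}

func main() {
	db, err := gorm.Open(sqlite.Open("test.db"), &gorm.Config{})
	if err != nil {
		panic("failed to connect database")
	}

	db.AutoMigrate(&Product{})

	db.Create(&Product{Code: "D42", Price: 100})

	var product Product
	db.First(&product, 1)                 // find product with integer primary key 1
	db.First(&product, "code = ?", "D42") // find product with code D42

	db.Model(&product).Update("Price", 200)

	db.Delete(&product, 1)
}
```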
## Contributing
[You can help to deliver a better GORM, check out things you can do](https://gorm.io/contribute.html)
## License
© Jinzhu, 2013~time.Now
Released under the [MIT License](https://github.com/go-gorm/gorm/blob/master/License)

513
vendor/gorm.io/gorm/association.go generated vendored Normal file
View file

@@ -0,0 +1,513 @@
package gorm
import (
"fmt"
"reflect"
"strings"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
"gorm.io/gorm/utils"
)
// Association Mode contains some helper methods to handle relationship things easily.
type Association struct {
DB *DB
Relationship *schema.Relationship
Error error
}
func (db *DB) Association(column string) *Association {
association := &Association{DB: db}
table := db.Statement.Table
if err := db.Statement.Parse(db.Statement.Model); err == nil {
db.Statement.Table = table
association.Relationship = db.Statement.Schema.Relationships.Relations[column]
if association.Relationship == nil {
association.Error = fmt.Errorf("%w: %v", ErrUnsupportedRelation, column)
}
db.Statement.ReflectValue = reflect.ValueOf(db.Statement.Model)
for db.Statement.ReflectValue.Kind() == reflect.Ptr {
db.Statement.ReflectValue = db.Statement.ReflectValue.Elem()
}
} else {
association.Error = err
}
return association
}
func (association *Association) Find(out interface{}, conds ...interface{}) error {
if association.Error == nil {
association.Error = association.buildCondition().Find(out, conds...).Error
}
return association.Error
}
func (association *Association) Append(values ...interface{}) error {
if association.Error == nil {
switch association.Relationship.Type {
case schema.HasOne, schema.BelongsTo:
if len(values) > 0 {
association.Error = association.Replace(values...)
}
default:
association.saveAssociation( /*clear*/ false, values...)
}
}
return association.Error
}
func (association *Association) Replace(values ...interface{}) error {
if association.Error == nil {
// save associations
if association.saveAssociation( /*clear*/ true, values...); association.Error != nil {
return association.Error
}
// set old associations' foreign key to null
reflectValue := association.DB.Statement.ReflectValue
rel := association.Relationship
switch rel.Type {
case schema.BelongsTo:
if len(values) == 0 {
updateMap := map[string]interface{}{}
switch reflectValue.Kind() {
case reflect.Slice, reflect.Array:
for i := 0; i < reflectValue.Len(); i++ {
association.Error = rel.Field.Set(reflectValue.Index(i), reflect.Zero(rel.Field.FieldType).Interface())
}
case reflect.Struct:
association.Error = rel.Field.Set(reflectValue, reflect.Zero(rel.Field.FieldType).Interface())
}
for _, ref := range rel.References {
updateMap[ref.ForeignKey.DBName] = nil
}
association.Error = association.DB.UpdateColumns(updateMap).Error
}
case schema.HasOne, schema.HasMany:
var (
primaryFields []*schema.Field
foreignKeys []string
updateMap = map[string]interface{}{}
relValues = schema.GetRelationsValues(reflectValue, []*schema.Relationship{rel})
modelValue = reflect.New(rel.FieldSchema.ModelType).Interface()
tx = association.DB.Model(modelValue)
)
if _, rvs := schema.GetIdentityFieldValuesMap(relValues, rel.FieldSchema.PrimaryFields); len(rvs) > 0 {
if column, values := schema.ToQueryValues(rel.FieldSchema.Table, rel.FieldSchema.PrimaryFieldDBNames, rvs); len(values) > 0 {
tx.Not(clause.IN{Column: column, Values: values})
}
}
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
primaryFields = append(primaryFields, ref.PrimaryKey)
foreignKeys = append(foreignKeys, ref.ForeignKey.DBName)
updateMap[ref.ForeignKey.DBName] = nil
} else if ref.PrimaryValue != "" {
tx.Where(clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
}
}
if _, pvs := schema.GetIdentityFieldValuesMap(reflectValue, primaryFields); len(pvs) > 0 {
column, values := schema.ToQueryValues(rel.FieldSchema.Table, foreignKeys, pvs)
association.Error = tx.Where(clause.IN{Column: column, Values: values}).UpdateColumns(updateMap).Error
}
case schema.Many2Many:
var (
primaryFields, relPrimaryFields []*schema.Field
joinPrimaryKeys, joinRelPrimaryKeys []string
modelValue = reflect.New(rel.JoinTable.ModelType).Interface()
tx = association.DB.Model(modelValue)
)
for _, ref := range rel.References {
if ref.PrimaryValue == "" {
if ref.OwnPrimaryKey {
primaryFields = append(primaryFields, ref.PrimaryKey)
joinPrimaryKeys = append(joinPrimaryKeys, ref.ForeignKey.DBName)
} else {
relPrimaryFields = append(relPrimaryFields, ref.PrimaryKey)
joinRelPrimaryKeys = append(joinRelPrimaryKeys, ref.ForeignKey.DBName)
}
} else {
tx.Clauses(clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
}
}
_, pvs := schema.GetIdentityFieldValuesMap(reflectValue, primaryFields)
if column, values := schema.ToQueryValues(rel.JoinTable.Table, joinPrimaryKeys, pvs); len(values) > 0 {
tx.Where(clause.IN{Column: column, Values: values})
} else {
return ErrPrimaryKeyRequired
}
_, rvs := schema.GetIdentityFieldValuesMapFromValues(values, relPrimaryFields)
if relColumn, relValues := schema.ToQueryValues(rel.JoinTable.Table, joinRelPrimaryKeys, rvs); len(relValues) > 0 {
tx.Where(clause.Not(clause.IN{Column: relColumn, Values: relValues}))
}
association.Error = tx.Delete(modelValue).Error
}
}
return association.Error
}
func (association *Association) Delete(values ...interface{}) error {
if association.Error == nil {
var (
reflectValue = association.DB.Statement.ReflectValue
rel = association.Relationship
primaryFields []*schema.Field
foreignKeys []string
updateAttrs = map[string]interface{}{}
conds []clause.Expression
)
for _, ref := range rel.References {
if ref.PrimaryValue == "" {
primaryFields = append(primaryFields, ref.PrimaryKey)
foreignKeys = append(foreignKeys, ref.ForeignKey.DBName)
updateAttrs[ref.ForeignKey.DBName] = nil
} else {
conds = append(conds, clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
}
}
switch rel.Type {
case schema.BelongsTo:
tx := association.DB.Model(reflect.New(rel.Schema.ModelType).Interface())
_, pvs := schema.GetIdentityFieldValuesMap(reflectValue, rel.Schema.PrimaryFields)
pcolumn, pvalues := schema.ToQueryValues(rel.Schema.Table, rel.Schema.PrimaryFieldDBNames, pvs)
conds = append(conds, clause.IN{Column: pcolumn, Values: pvalues})
_, rvs := schema.GetIdentityFieldValuesMapFromValues(values, primaryFields)
relColumn, relValues := schema.ToQueryValues(rel.Schema.Table, foreignKeys, rvs)
conds = append(conds, clause.IN{Column: relColumn, Values: relValues})
association.Error = tx.Clauses(conds...).UpdateColumns(updateAttrs).Error
case schema.HasOne, schema.HasMany:
tx := association.DB.Model(reflect.New(rel.FieldSchema.ModelType).Interface())
_, pvs := schema.GetIdentityFieldValuesMap(reflectValue, primaryFields)
pcolumn, pvalues := schema.ToQueryValues(rel.FieldSchema.Table, foreignKeys, pvs)
conds = append(conds, clause.IN{Column: pcolumn, Values: pvalues})
_, rvs := schema.GetIdentityFieldValuesMapFromValues(values, rel.FieldSchema.PrimaryFields)
relColumn, relValues := schema.ToQueryValues(rel.FieldSchema.Table, rel.FieldSchema.PrimaryFieldDBNames, rvs)
conds = append(conds, clause.IN{Column: relColumn, Values: relValues})
association.Error = tx.Clauses(conds...).UpdateColumns(updateAttrs).Error
case schema.Many2Many:
var (
primaryFields, relPrimaryFields []*schema.Field
joinPrimaryKeys, joinRelPrimaryKeys []string
joinValue = reflect.New(rel.JoinTable.ModelType).Interface()
)
for _, ref := range rel.References {
if ref.PrimaryValue == "" {
if ref.OwnPrimaryKey {
primaryFields = append(primaryFields, ref.PrimaryKey)
joinPrimaryKeys = append(joinPrimaryKeys, ref.ForeignKey.DBName)
} else {
relPrimaryFields = append(relPrimaryFields, ref.PrimaryKey)
joinRelPrimaryKeys = append(joinRelPrimaryKeys, ref.ForeignKey.DBName)
}
} else {
conds = append(conds, clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
}
}
_, pvs := schema.GetIdentityFieldValuesMap(reflectValue, primaryFields)
pcolumn, pvalues := schema.ToQueryValues(rel.JoinTable.Table, joinPrimaryKeys, pvs)
conds = append(conds, clause.IN{Column: pcolumn, Values: pvalues})
_, rvs := schema.GetIdentityFieldValuesMapFromValues(values, relPrimaryFields)
relColumn, relValues := schema.ToQueryValues(rel.JoinTable.Table, joinRelPrimaryKeys, rvs)
conds = append(conds, clause.IN{Column: relColumn, Values: relValues})
association.Error = association.DB.Where(clause.Where{Exprs: conds}).Model(nil).Delete(joinValue).Error
}
if association.Error == nil {
// clean up deleted values' foreign key
relValuesMap, _ := schema.GetIdentityFieldValuesMapFromValues(values, rel.FieldSchema.PrimaryFields)
cleanUpDeletedRelations := func(data reflect.Value) {
if _, zero := rel.Field.ValueOf(data); !zero {
fieldValue := reflect.Indirect(rel.Field.ReflectValueOf(data))
primaryValues := make([]interface{}, len(rel.FieldSchema.PrimaryFields))
switch fieldValue.Kind() {
case reflect.Slice, reflect.Array:
validFieldValues := reflect.Zero(rel.Field.IndirectFieldType)
for i := 0; i < fieldValue.Len(); i++ {
for idx, field := range rel.FieldSchema.PrimaryFields {
primaryValues[idx], _ = field.ValueOf(fieldValue.Index(i))
}
if _, ok := relValuesMap[utils.ToStringKey(primaryValues...)]; !ok {
validFieldValues = reflect.Append(validFieldValues, fieldValue.Index(i))
}
}
association.Error = rel.Field.Set(data, validFieldValues.Interface())
case reflect.Struct:
for idx, field := range rel.FieldSchema.PrimaryFields {
primaryValues[idx], _ = field.ValueOf(fieldValue)
}
if _, ok := relValuesMap[utils.ToStringKey(primaryValues...)]; ok {
if association.Error = rel.Field.Set(data, reflect.Zero(rel.FieldSchema.ModelType).Interface()); association.Error != nil {
break
}
if rel.JoinTable == nil {
for _, ref := range rel.References {
if ref.OwnPrimaryKey || ref.PrimaryValue != "" {
association.Error = ref.ForeignKey.Set(fieldValue, reflect.Zero(ref.ForeignKey.FieldType).Interface())
} else {
association.Error = ref.ForeignKey.Set(data, reflect.Zero(ref.ForeignKey.FieldType).Interface())
}
}
}
}
}
}
}
switch reflectValue.Kind() {
case reflect.Slice, reflect.Array:
for i := 0; i < reflectValue.Len(); i++ {
cleanUpDeletedRelations(reflect.Indirect(reflectValue.Index(i)))
}
case reflect.Struct:
cleanUpDeletedRelations(reflectValue)
}
}
}
return association.Error
}
func (association *Association) Clear() error {
return association.Replace()
}
func (association *Association) Count() (count int64) {
if association.Error == nil {
association.Error = association.buildCondition().Count(&count).Error
}
return
}
type assignBack struct {
Source reflect.Value
Index int
Dest reflect.Value
}
func (association *Association) saveAssociation(clear bool, values ...interface{}) {
var (
reflectValue = association.DB.Statement.ReflectValue
assignBacks []assignBack // assign association values back to arguments after save
)
appendToRelations := func(source, rv reflect.Value, clear bool) {
switch association.Relationship.Type {
case schema.HasOne, schema.BelongsTo:
switch rv.Kind() {
case reflect.Slice, reflect.Array:
if rv.Len() > 0 {
association.Error = association.Relationship.Field.Set(source, rv.Index(0).Addr().Interface())
if association.Relationship.Field.FieldType.Kind() == reflect.Struct {
assignBacks = append(assignBacks, assignBack{Source: source, Dest: rv.Index(0)})
}
}
case reflect.Struct:
association.Error = association.Relationship.Field.Set(source, rv.Addr().Interface())
if association.Relationship.Field.FieldType.Kind() == reflect.Struct {
assignBacks = append(assignBacks, assignBack{Source: source, Dest: rv})
}
}
case schema.HasMany, schema.Many2Many:
elemType := association.Relationship.Field.IndirectFieldType.Elem()
fieldValue := reflect.Indirect(association.Relationship.Field.ReflectValueOf(source))
if clear {
fieldValue = reflect.New(association.Relationship.Field.IndirectFieldType).Elem()
}
appendToFieldValues := func(ev reflect.Value) {
if ev.Type().AssignableTo(elemType) {
fieldValue = reflect.Append(fieldValue, ev)
} else if ev.Type().Elem().AssignableTo(elemType) {
fieldValue = reflect.Append(fieldValue, ev.Elem())
} else {
association.Error = fmt.Errorf("unsupported data type: %v for relation %v", ev.Type(), association.Relationship.Name)
}
if elemType.Kind() == reflect.Struct {
assignBacks = append(assignBacks, assignBack{Source: source, Dest: ev, Index: fieldValue.Len()})
}
}
switch rv.Kind() {
case reflect.Slice, reflect.Array:
for i := 0; i < rv.Len(); i++ {
appendToFieldValues(reflect.Indirect(rv.Index(i)).Addr())
}
case reflect.Struct:
appendToFieldValues(rv.Addr())
}
if association.Error == nil {
association.Error = association.Relationship.Field.Set(source, fieldValue.Interface())
}
}
}
selectedSaveColumns := []string{association.Relationship.Name}
omitColumns := []string{}
selectColumns, _ := association.DB.Statement.SelectAndOmitColumns(true, false)
for name, ok := range selectColumns {
columnName := ""
if strings.HasPrefix(name, association.Relationship.Name) {
if columnName = strings.TrimPrefix(name, association.Relationship.Name); columnName == ".*" {
columnName = name
}
} else if strings.HasPrefix(name, clause.Associations) {
columnName = name
}
if columnName != "" {
if ok {
selectedSaveColumns = append(selectedSaveColumns, columnName)
} else {
omitColumns = append(omitColumns, columnName)
}
}
}
for _, ref := range association.Relationship.References {
if !ref.OwnPrimaryKey {
selectedSaveColumns = append(selectedSaveColumns, ref.ForeignKey.Name)
}
}
associationDB := association.DB.Session(&Session{}).Model(nil)
if !association.DB.FullSaveAssociations {
associationDB.Select(selectedSaveColumns)
}
if len(omitColumns) > 0 {
associationDB.Omit(omitColumns...)
}
associationDB = associationDB.Session(&Session{})
switch reflectValue.Kind() {
case reflect.Slice, reflect.Array:
if len(values) != reflectValue.Len() {
// clear old data
if clear && len(values) == 0 {
for i := 0; i < reflectValue.Len(); i++ {
if err := association.Relationship.Field.Set(reflectValue.Index(i), reflect.New(association.Relationship.Field.IndirectFieldType).Interface()); err != nil {
association.Error = err
break
}
if association.Relationship.JoinTable == nil {
for _, ref := range association.Relationship.References {
if !ref.OwnPrimaryKey && ref.PrimaryValue == "" {
if err := ref.ForeignKey.Set(reflectValue.Index(i), reflect.Zero(ref.ForeignKey.FieldType).Interface()); err != nil {
association.Error = err
break
}
}
}
}
}
break
}
association.Error = ErrInvalidValueOfLength
return
}
for i := 0; i < reflectValue.Len(); i++ {
appendToRelations(reflectValue.Index(i), reflect.Indirect(reflect.ValueOf(values[i])), clear)
// TODO support save slice data, sql with case?
association.Error = associationDB.Updates(reflectValue.Index(i).Addr().Interface()).Error
}
case reflect.Struct:
// clear old data
if clear && len(values) == 0 {
association.Error = association.Relationship.Field.Set(reflectValue, reflect.New(association.Relationship.Field.IndirectFieldType).Interface())
if association.Relationship.JoinTable == nil && association.Error == nil {
for _, ref := range association.Relationship.References {
if !ref.OwnPrimaryKey && ref.PrimaryValue == "" {
association.Error = ref.ForeignKey.Set(reflectValue, reflect.Zero(ref.ForeignKey.FieldType).Interface())
}
}
}
}
for idx, value := range values {
rv := reflect.Indirect(reflect.ValueOf(value))
appendToRelations(reflectValue, rv, clear && idx == 0)
}
if len(values) > 0 {
association.Error = associationDB.Updates(reflectValue.Addr().Interface()).Error
}
}
for _, assignBack := range assignBacks {
fieldValue := reflect.Indirect(association.Relationship.Field.ReflectValueOf(assignBack.Source))
if assignBack.Index > 0 {
reflect.Indirect(assignBack.Dest).Set(fieldValue.Index(assignBack.Index - 1))
} else {
reflect.Indirect(assignBack.Dest).Set(fieldValue)
}
}
}
func (association *Association) buildCondition() *DB {
var (
queryConds = association.Relationship.ToQueryConditions(association.DB.Statement.ReflectValue)
modelValue = reflect.New(association.Relationship.FieldSchema.ModelType).Interface()
tx = association.DB.Model(modelValue)
)
if association.Relationship.JoinTable != nil {
if !tx.Statement.Unscoped && len(association.Relationship.JoinTable.QueryClauses) > 0 {
joinStmt := Statement{DB: tx, Schema: association.Relationship.JoinTable, Table: association.Relationship.JoinTable.Table, Clauses: map[string]clause.Clause{}}
for _, queryClause := range association.Relationship.JoinTable.QueryClauses {
joinStmt.AddClause(queryClause)
}
joinStmt.Build("WHERE")
tx.Clauses(clause.Expr{SQL: strings.Replace(joinStmt.SQL.String(), "WHERE ", "", 1), Vars: joinStmt.Vars})
}
tx = tx.Session(&Session{QueryFields: true}).Clauses(clause.From{Joins: []clause.Join{{
Table: clause.Table{Name: association.Relationship.JoinTable.Table},
ON: clause.Where{Exprs: queryConds},
}}})
} else {
tx.Clauses(clause.Where{Exprs: queryConds})
}
return tx
}
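`Association`, together with `Find`/`Append`/`Replace`/`Delete`/`Clear`/`Count` above, makes up gorm's Association Mode. A minimal usage sketch, assuming an opened `db` and hypothetical `User` and `Language` models joined many-to-many:

```go
type Language struct {
	gorm.Model
	Name string
}

type User struct {
	gorm.Model
	Name      string
	Languages []Language `gorm:"many2many:user_languages"`
}

var user User
db.First(&user)

assoc := db.Model(&user).Association("Languages")

var langs []Language
_ = assoc.Find(&langs)                      // load related rows
_ = assoc.Append(&Language{Name: "Go"})     // add a relation
_ = assoc.Replace([]Language{{Name: "Zh"}}) // replace all relations
_ = assoc.Count()                           // number of related rows
_ = assoc.Clear()                           // remove all relations
```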

327
vendor/gorm.io/gorm/callbacks.go generated vendored Normal file
View file

@@ -0,0 +1,327 @@
package gorm
import (
"context"
"errors"
"fmt"
"reflect"
"sort"
"time"
"gorm.io/gorm/schema"
"gorm.io/gorm/utils"
)
func initializeCallbacks(db *DB) *callbacks {
return &callbacks{
processors: map[string]*processor{
"create": {db: db},
"query": {db: db},
"update": {db: db},
"delete": {db: db},
"row": {db: db},
"raw": {db: db},
},
}
}
// callbacks gorm callbacks manager
type callbacks struct {
processors map[string]*processor
}
type processor struct {
db *DB
Clauses []string
fns []func(*DB)
callbacks []*callback
}
type callback struct {
name string
before string
after string
remove bool
replace bool
match func(*DB) bool
handler func(*DB)
processor *processor
}
func (cs *callbacks) Create() *processor {
return cs.processors["create"]
}
func (cs *callbacks) Query() *processor {
return cs.processors["query"]
}
func (cs *callbacks) Update() *processor {
return cs.processors["update"]
}
func (cs *callbacks) Delete() *processor {
return cs.processors["delete"]
}
func (cs *callbacks) Row() *processor {
return cs.processors["row"]
}
func (cs *callbacks) Raw() *processor {
return cs.processors["raw"]
}
func (p *processor) Execute(db *DB) {
// call scopes
for len(db.Statement.scopes) > 0 {
scopes := db.Statement.scopes
db.Statement.scopes = nil
for _, scope := range scopes {
db = scope(db)
}
}
var (
curTime = time.Now()
stmt = db.Statement
resetBuildClauses bool
)
if len(stmt.BuildClauses) == 0 {
stmt.BuildClauses = p.Clauses
resetBuildClauses = true
}
// assign model values
if stmt.Model == nil {
stmt.Model = stmt.Dest
} else if stmt.Dest == nil {
stmt.Dest = stmt.Model
}
// parse model values
if stmt.Model != nil {
if err := stmt.Parse(stmt.Model); err != nil && (!errors.Is(err, schema.ErrUnsupportedDataType) || (stmt.Table == "" && stmt.SQL.Len() == 0)) {
if errors.Is(err, schema.ErrUnsupportedDataType) && stmt.Table == "" {
db.AddError(fmt.Errorf("%w: Table not set, please set it like: db.Model(&user) or db.Table(\"users\")", err))
} else {
db.AddError(err)
}
}
}
// assign stmt.ReflectValue
if stmt.Dest != nil {
stmt.ReflectValue = reflect.ValueOf(stmt.Dest)
for stmt.ReflectValue.Kind() == reflect.Ptr {
if stmt.ReflectValue.IsNil() && stmt.ReflectValue.CanAddr() {
stmt.ReflectValue.Set(reflect.New(stmt.ReflectValue.Type().Elem()))
}
stmt.ReflectValue = stmt.ReflectValue.Elem()
}
if !stmt.ReflectValue.IsValid() {
db.AddError(ErrInvalidValue)
}
}
for _, f := range p.fns {
f(db)
}
db.Logger.Trace(stmt.Context, curTime, func() (string, int64) {
return db.Dialector.Explain(stmt.SQL.String(), stmt.Vars...), db.RowsAffected
}, db.Error)
if !stmt.DB.DryRun {
stmt.SQL.Reset()
stmt.Vars = nil
}
if resetBuildClauses {
stmt.BuildClauses = nil
}
}
func (p *processor) Get(name string) func(*DB) {
for i := len(p.callbacks) - 1; i >= 0; i-- {
if v := p.callbacks[i]; v.name == name && !v.remove {
return v.handler
}
}
return nil
}
func (p *processor) Before(name string) *callback {
return &callback{before: name, processor: p}
}
func (p *processor) After(name string) *callback {
return &callback{after: name, processor: p}
}
func (p *processor) Match(fc func(*DB) bool) *callback {
return &callback{match: fc, processor: p}
}
func (p *processor) Register(name string, fn func(*DB)) error {
return (&callback{processor: p}).Register(name, fn)
}
func (p *processor) Remove(name string) error {
return (&callback{processor: p}).Remove(name)
}
func (p *processor) Replace(name string, fn func(*DB)) error {
return (&callback{processor: p}).Replace(name, fn)
}
func (p *processor) compile() (err error) {
var callbacks []*callback
for _, callback := range p.callbacks {
if callback.match == nil || callback.match(p.db) {
callbacks = append(callbacks, callback)
}
}
p.callbacks = callbacks
if p.fns, err = sortCallbacks(p.callbacks); err != nil {
p.db.Logger.Error(context.Background(), "Got error when compile callbacks, got %v", err)
}
return
}
func (c *callback) Before(name string) *callback {
c.before = name
return c
}
func (c *callback) After(name string) *callback {
c.after = name
return c
}
func (c *callback) Register(name string, fn func(*DB)) error {
c.name = name
c.handler = fn
c.processor.callbacks = append(c.processor.callbacks, c)
return c.processor.compile()
}
func (c *callback) Remove(name string) error {
c.processor.db.Logger.Warn(context.Background(), "removing callback `%v` from %v\n", name, utils.FileWithLineNum())
c.name = name
c.remove = true
c.processor.callbacks = append(c.processor.callbacks, c)
return c.processor.compile()
}
func (c *callback) Replace(name string, fn func(*DB)) error {
c.processor.db.Logger.Info(context.Background(), "replacing callback `%v` from %v\n", name, utils.FileWithLineNum())
c.name = name
c.handler = fn
c.replace = true
c.processor.callbacks = append(c.processor.callbacks, c)
return c.processor.compile()
}
// getRIndex get right index from string slice
func getRIndex(strs []string, str string) int {
for i := len(strs) - 1; i >= 0; i-- {
if strs[i] == str {
return i
}
}
return -1
}
func sortCallbacks(cs []*callback) (fns []func(*DB), err error) {
var (
names, sorted []string
sortCallback func(*callback) error
)
sort.Slice(cs, func(i, j int) bool {
return cs[j].before == "*" || cs[j].after == "*"
})
for _, c := range cs {
// show a warning message when the callback name already exists
if idx := getRIndex(names, c.name); idx > -1 && !c.replace && !c.remove && !cs[idx].remove {
c.processor.db.Logger.Warn(context.Background(), "duplicated callback `%v` from %v\n", c.name, utils.FileWithLineNum())
}
names = append(names, c.name)
}
sortCallback = func(c *callback) error {
if c.before != "" { // if defined before callback
if c.before == "*" && len(sorted) > 0 {
if curIdx := getRIndex(sorted, c.name); curIdx == -1 {
sorted = append([]string{c.name}, sorted...)
}
} else if sortedIdx := getRIndex(sorted, c.before); sortedIdx != -1 {
if curIdx := getRIndex(sorted, c.name); curIdx == -1 {
// if before callback already sorted, append current callback just after it
sorted = append(sorted[:sortedIdx], append([]string{c.name}, sorted[sortedIdx:]...)...)
} else if curIdx > sortedIdx {
return fmt.Errorf("conflicting callback %v with before %v", c.name, c.before)
}
} else if idx := getRIndex(names, c.before); idx != -1 {
// if before callback exists
cs[idx].after = c.name
}
}
if c.after != "" { // if defined after callback
if c.after == "*" && len(sorted) > 0 {
if curIdx := getRIndex(sorted, c.name); curIdx == -1 {
sorted = append(sorted, c.name)
}
} else if sortedIdx := getRIndex(sorted, c.after); sortedIdx != -1 {
if curIdx := getRIndex(sorted, c.name); curIdx == -1 {
// if after callback sorted, append current callback to last
sorted = append(sorted, c.name)
} else if curIdx < sortedIdx {
return fmt.Errorf("conflicting callback %v with before %v", c.name, c.after)
}
} else if idx := getRIndex(names, c.after); idx != -1 {
// if the after callback exists but hasn't been sorted yet,
// set after callback's before callback to current callback
after := cs[idx]
if after.before == "" {
after.before = c.name
}
if err := sortCallback(after); err != nil {
return err
}
if err := sortCallback(c); err != nil {
return err
}
}
}
// if the current callback hasn't been sorted yet, append it last
if getRIndex(sorted, c.name) == -1 {
sorted = append(sorted, c.name)
}
return nil
}
for _, c := range cs {
if err = sortCallback(c); err != nil {
return
}
}
for _, name := range sorted {
if idx := getRIndex(names, name); !cs[idx].remove {
fns = append(fns, cs[idx].handler)
}
}
return
}
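The manager above is also gorm's plugin surface: `db.Callback()` returns it, and `Before`/`After`/`Register`/`Replace`/`Remove` position custom handlers relative to the defaults wired up in `callbacks/callbacks.go`. A minimal sketch registering a hypothetical handler:

```go
// Runs before gorm's built-in create handler on every Create call.
err := db.Callback().Create().Before("gorm:create").
	Register("my_plugin:before_create", func(tx *gorm.DB) {
		tx.Logger.Info(tx.Statement.Context, "about to insert into %v", tx.Statement.Table)
	})
```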

390
vendor/gorm.io/gorm/callbacks/associations.go generated vendored Normal file
View file

@@ -0,0 +1,390 @@
package callbacks
import (
"reflect"
"strings"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
)
func SaveBeforeAssociations(create bool) func(db *gorm.DB) {
return func(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil {
selectColumns, restricted := db.Statement.SelectAndOmitColumns(create, !create)
// Save Belongs To associations
for _, rel := range db.Statement.Schema.Relationships.BelongsTo {
if v, ok := selectColumns[rel.Name]; (ok && !v) || (!ok && restricted) {
continue
}
setupReferences := func(obj reflect.Value, elem reflect.Value) {
for _, ref := range rel.References {
if !ref.OwnPrimaryKey {
pv, _ := ref.PrimaryKey.ValueOf(elem)
db.AddError(ref.ForeignKey.Set(obj, pv))
if dest, ok := db.Statement.Dest.(map[string]interface{}); ok {
dest[ref.ForeignKey.DBName] = pv
if _, ok := dest[rel.Name]; ok {
dest[rel.Name] = elem.Interface()
}
}
}
}
}
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
var (
objs = make([]reflect.Value, 0, db.Statement.ReflectValue.Len())
fieldType = rel.Field.FieldType
isPtr = fieldType.Kind() == reflect.Ptr
)
if !isPtr {
fieldType = reflect.PtrTo(fieldType)
}
elems := reflect.MakeSlice(reflect.SliceOf(fieldType), 0, 10)
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
obj := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(obj).Kind() == reflect.Struct {
if _, zero := rel.Field.ValueOf(obj); !zero { // check belongs to relation value
rv := rel.Field.ReflectValueOf(obj) // relation reflect value
objs = append(objs, obj)
if isPtr {
elems = reflect.Append(elems, rv)
} else {
elems = reflect.Append(elems, rv.Addr())
}
}
} else {
break
}
}
if elems.Len() > 0 {
if saveAssociations(db, rel, elems.Interface(), selectColumns, restricted, nil) == nil {
for i := 0; i < elems.Len(); i++ {
setupReferences(objs[i], elems.Index(i))
}
}
}
case reflect.Struct:
if _, zero := rel.Field.ValueOf(db.Statement.ReflectValue); !zero {
rv := rel.Field.ReflectValueOf(db.Statement.ReflectValue) // relation reflect value
if rv.Kind() != reflect.Ptr {
rv = rv.Addr()
}
if saveAssociations(db, rel, rv.Interface(), selectColumns, restricted, nil) == nil {
setupReferences(db.Statement.ReflectValue, rv)
}
}
}
}
}
}
}
func SaveAfterAssociations(create bool) func(db *gorm.DB) {
return func(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil {
selectColumns, restricted := db.Statement.SelectAndOmitColumns(create, !create)
// Save Has One associations
for _, rel := range db.Statement.Schema.Relationships.HasOne {
if v, ok := selectColumns[rel.Name]; (ok && !v) || (!ok && restricted) {
continue
}
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
var (
fieldType = rel.Field.FieldType
isPtr = fieldType.Kind() == reflect.Ptr
)
if !isPtr {
fieldType = reflect.PtrTo(fieldType)
}
elems := reflect.MakeSlice(reflect.SliceOf(fieldType), 0, 10)
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
obj := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(obj).Kind() == reflect.Struct {
if _, zero := rel.Field.ValueOf(obj); !zero {
rv := rel.Field.ReflectValueOf(obj)
if rv.Kind() != reflect.Ptr {
rv = rv.Addr()
}
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
fv, _ := ref.PrimaryKey.ValueOf(obj)
db.AddError(ref.ForeignKey.Set(rv, fv))
} else if ref.PrimaryValue != "" {
db.AddError(ref.ForeignKey.Set(rv, ref.PrimaryValue))
}
}
elems = reflect.Append(elems, rv)
}
}
}
if elems.Len() > 0 {
assignmentColumns := make([]string, 0, len(rel.References))
for _, ref := range rel.References {
assignmentColumns = append(assignmentColumns, ref.ForeignKey.DBName)
}
saveAssociations(db, rel, elems.Interface(), selectColumns, restricted, assignmentColumns)
}
case reflect.Struct:
if _, zero := rel.Field.ValueOf(db.Statement.ReflectValue); !zero {
f := rel.Field.ReflectValueOf(db.Statement.ReflectValue)
if f.Kind() != reflect.Ptr {
f = f.Addr()
}
assignmentColumns := make([]string, 0, len(rel.References))
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
fv, _ := ref.PrimaryKey.ValueOf(db.Statement.ReflectValue)
ref.ForeignKey.Set(f, fv)
} else if ref.PrimaryValue != "" {
ref.ForeignKey.Set(f, ref.PrimaryValue)
}
assignmentColumns = append(assignmentColumns, ref.ForeignKey.DBName)
}
saveAssociations(db, rel, f.Interface(), selectColumns, restricted, assignmentColumns)
}
}
}
// Save Has Many associations
for _, rel := range db.Statement.Schema.Relationships.HasMany {
if v, ok := selectColumns[rel.Name]; (ok && !v) || (!ok && restricted) {
continue
}
fieldType := rel.Field.IndirectFieldType.Elem()
isPtr := fieldType.Kind() == reflect.Ptr
if !isPtr {
fieldType = reflect.PtrTo(fieldType)
}
elems := reflect.MakeSlice(reflect.SliceOf(fieldType), 0, 10)
appendToElems := func(v reflect.Value) {
if _, zero := rel.Field.ValueOf(v); !zero {
f := reflect.Indirect(rel.Field.ReflectValueOf(v))
for i := 0; i < f.Len(); i++ {
elem := f.Index(i)
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
pv, _ := ref.PrimaryKey.ValueOf(v)
ref.ForeignKey.Set(elem, pv)
} else if ref.PrimaryValue != "" {
ref.ForeignKey.Set(elem, ref.PrimaryValue)
}
}
if isPtr {
elems = reflect.Append(elems, elem)
} else {
elems = reflect.Append(elems, elem.Addr())
}
}
}
}
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
obj := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(obj).Kind() == reflect.Struct {
appendToElems(obj)
}
}
case reflect.Struct:
appendToElems(db.Statement.ReflectValue)
}
if elems.Len() > 0 {
assignmentColumns := make([]string, 0, len(rel.References))
for _, ref := range rel.References {
assignmentColumns = append(assignmentColumns, ref.ForeignKey.DBName)
}
saveAssociations(db, rel, elems.Interface(), selectColumns, restricted, assignmentColumns)
}
}
// Save Many2Many associations
for _, rel := range db.Statement.Schema.Relationships.Many2Many {
if v, ok := selectColumns[rel.Name]; (ok && !v) || (!ok && restricted) {
continue
}
fieldType := rel.Field.IndirectFieldType.Elem()
isPtr := fieldType.Kind() == reflect.Ptr
if !isPtr {
fieldType = reflect.PtrTo(fieldType)
}
elems := reflect.MakeSlice(reflect.SliceOf(fieldType), 0, 10)
joins := reflect.MakeSlice(reflect.SliceOf(reflect.PtrTo(rel.JoinTable.ModelType)), 0, 10)
objs := []reflect.Value{}
appendToJoins := func(obj reflect.Value, elem reflect.Value) {
joinValue := reflect.New(rel.JoinTable.ModelType)
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
fv, _ := ref.PrimaryKey.ValueOf(obj)
ref.ForeignKey.Set(joinValue, fv)
} else if ref.PrimaryValue != "" {
ref.ForeignKey.Set(joinValue, ref.PrimaryValue)
} else {
fv, _ := ref.PrimaryKey.ValueOf(elem)
ref.ForeignKey.Set(joinValue, fv)
}
}
joins = reflect.Append(joins, joinValue)
}
appendToElems := func(v reflect.Value) {
if _, zero := rel.Field.ValueOf(v); !zero {
f := reflect.Indirect(rel.Field.ReflectValueOf(v))
for i := 0; i < f.Len(); i++ {
elem := f.Index(i)
objs = append(objs, v)
if isPtr {
elems = reflect.Append(elems, elem)
} else {
elems = reflect.Append(elems, elem.Addr())
}
}
}
}
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
obj := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(obj).Kind() == reflect.Struct {
appendToElems(obj)
}
}
case reflect.Struct:
appendToElems(db.Statement.ReflectValue)
}
// optimize elems of reflect value length
if elemLen := elems.Len(); elemLen > 0 {
if v, ok := selectColumns[rel.Name+".*"]; !ok || v {
saveAssociations(db, rel, elems.Interface(), selectColumns, restricted, nil)
}
for i := 0; i < elemLen; i++ {
appendToJoins(objs[i], elems.Index(i))
}
}
if joins.Len() > 0 {
db.AddError(db.Session(&gorm.Session{NewDB: true}).Clauses(clause.OnConflict{DoNothing: true}).Session(&gorm.Session{
SkipHooks: db.Statement.SkipHooks,
DisableNestedTransaction: true,
}).Create(joins.Interface()).Error)
}
}
}
}
}
func onConflictOption(stmt *gorm.Statement, s *schema.Schema, selectColumns map[string]bool, restricted bool, defaultUpdatingColumns []string) clause.OnConflict {
if stmt.DB.FullSaveAssociations {
defaultUpdatingColumns = make([]string, 0, len(s.DBNames))
for _, dbName := range s.DBNames {
if v, ok := selectColumns[dbName]; (ok && !v) || (!ok && restricted) {
continue
}
if !s.LookUpField(dbName).PrimaryKey {
defaultUpdatingColumns = append(defaultUpdatingColumns, dbName)
}
}
}
if len(defaultUpdatingColumns) > 0 {
columns := make([]clause.Column, 0, len(s.PrimaryFieldDBNames))
for _, dbName := range s.PrimaryFieldDBNames {
columns = append(columns, clause.Column{Name: dbName})
}
return clause.OnConflict{
Columns: columns,
DoUpdates: clause.AssignmentColumns(defaultUpdatingColumns),
}
}
return clause.OnConflict{DoNothing: true}
}
func saveAssociations(db *gorm.DB, rel *schema.Relationship, values interface{}, selectColumns map[string]bool, restricted bool, defaultUpdatingColumns []string) error {
var (
selects, omits []string
onConflict = onConflictOption(db.Statement, rel.FieldSchema, selectColumns, restricted, defaultUpdatingColumns)
refName = rel.Name + "."
)
for name, ok := range selectColumns {
columnName := ""
if strings.HasPrefix(name, refName) {
columnName = strings.TrimPrefix(name, refName)
}
if columnName != "" {
if ok {
selects = append(selects, columnName)
} else {
omits = append(omits, columnName)
}
}
}
tx := db.Session(&gorm.Session{NewDB: true}).Clauses(onConflict).Session(&gorm.Session{
FullSaveAssociations: db.FullSaveAssociations,
SkipHooks: db.Statement.SkipHooks,
DisableNestedTransaction: true,
})
db.Statement.Settings.Range(func(k, v interface{}) bool {
tx.Statement.Settings.Store(k, v)
return true
})
if tx.Statement.FullSaveAssociations {
tx = tx.InstanceSet("gorm:update_track_time", true)
}
if len(selects) > 0 {
tx = tx.Select(selects)
} else if restricted && len(omits) == 0 {
tx = tx.Omit(clause.Associations)
}
if len(omits) > 0 {
tx = tx.Omit(omits...)
}
return db.AddError(tx.Create(values).Error)
}
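`saveAssociations` above switches between an insert-or-ignore and a column-by-column upsert depending on `FullSaveAssociations`; from calling code that is just a session flag. A minimal sketch, assuming a hypothetical `user` value with its associations already populated:

```go
// Default: missing associated rows are created, existing ones are left untouched
// (clause.OnConflict{DoNothing: true} from onConflictOption).
db.Save(&user)

// Upsert associated rows as well, column by column.
db.Session(&gorm.Session{FullSaveAssociations: true}).Updates(&user)
```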

83
vendor/gorm.io/gorm/callbacks/callbacks.go generated vendored Normal file
View file

@ -0,0 +1,83 @@
package callbacks
import (
"gorm.io/gorm"
)
var (
createClauses = []string{"INSERT", "VALUES", "ON CONFLICT"}
queryClauses = []string{"SELECT", "FROM", "WHERE", "GROUP BY", "ORDER BY", "LIMIT", "FOR"}
updateClauses = []string{"UPDATE", "SET", "WHERE"}
deleteClauses = []string{"DELETE", "FROM", "WHERE"}
)
type Config struct {
LastInsertIDReversed bool
WithReturning bool
CreateClauses []string
QueryClauses []string
UpdateClauses []string
DeleteClauses []string
}
func RegisterDefaultCallbacks(db *gorm.DB, config *Config) {
enableTransaction := func(db *gorm.DB) bool {
return !db.SkipDefaultTransaction
}
createCallback := db.Callback().Create()
createCallback.Match(enableTransaction).Register("gorm:begin_transaction", BeginTransaction)
createCallback.Register("gorm:before_create", BeforeCreate)
createCallback.Register("gorm:save_before_associations", SaveBeforeAssociations(true))
createCallback.Register("gorm:create", Create(config))
createCallback.Register("gorm:save_after_associations", SaveAfterAssociations(true))
createCallback.Register("gorm:after_create", AfterCreate)
createCallback.Match(enableTransaction).Register("gorm:commit_or_rollback_transaction", CommitOrRollbackTransaction)
if len(config.CreateClauses) == 0 {
config.CreateClauses = createClauses
}
createCallback.Clauses = config.CreateClauses
queryCallback := db.Callback().Query()
queryCallback.Register("gorm:query", Query)
queryCallback.Register("gorm:preload", Preload)
queryCallback.Register("gorm:after_query", AfterQuery)
if len(config.QueryClauses) == 0 {
config.QueryClauses = queryClauses
}
queryCallback.Clauses = config.QueryClauses
deleteCallback := db.Callback().Delete()
deleteCallback.Match(enableTransaction).Register("gorm:begin_transaction", BeginTransaction)
deleteCallback.Register("gorm:before_delete", BeforeDelete)
deleteCallback.Register("gorm:delete_before_associations", DeleteBeforeAssociations)
deleteCallback.Register("gorm:delete", Delete)
deleteCallback.Register("gorm:after_delete", AfterDelete)
deleteCallback.Match(enableTransaction).Register("gorm:commit_or_rollback_transaction", CommitOrRollbackTransaction)
if len(config.DeleteClauses) == 0 {
config.DeleteClauses = deleteClauses
}
deleteCallback.Clauses = config.DeleteClauses
updateCallback := db.Callback().Update()
updateCallback.Match(enableTransaction).Register("gorm:begin_transaction", BeginTransaction)
updateCallback.Register("gorm:setup_reflect_value", SetupUpdateReflectValue)
updateCallback.Register("gorm:before_update", BeforeUpdate)
updateCallback.Register("gorm:save_before_associations", SaveBeforeAssociations(false))
updateCallback.Register("gorm:update", Update)
updateCallback.Register("gorm:save_after_associations", SaveAfterAssociations(false))
updateCallback.Register("gorm:after_update", AfterUpdate)
updateCallback.Match(enableTransaction).Register("gorm:commit_or_rollback_transaction", CommitOrRollbackTransaction)
if len(config.UpdateClauses) == 0 {
config.UpdateClauses = updateClauses
}
updateCallback.Clauses = config.UpdateClauses
rowCallback := db.Callback().Row()
rowCallback.Register("gorm:row", RowQuery)
rowCallback.Clauses = config.QueryClauses
rawCallback := db.Callback().Raw()
rawCallback.Register("gorm:raw", RawExec)
rawCallback.Clauses = config.QueryClauses
}
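Illustrative sketch (not part of the vendored file): RegisterDefaultCallbacks above wires the standard create/query/update/delete/row/raw pipelines; plugins hook into the same registry. The Before(...) ordering call is gorm's public callback API rather than anything defined in this file, so treat it as an assumption, and the plugin name is made up.
package example

import (
	"time"

	"gorm.io/gorm"
)

// registerAuditCallback hooks a custom step in front of the default
// gorm:create callback registered above, stamping the statement instance.
func registerAuditCallback(db *gorm.DB) error {
	return db.Callback().Create().Before("gorm:create").
		Register("example:audit_create", func(tx *gorm.DB) {
			// InstanceSet/InstanceGet is the same per-statement storage the
			// default callbacks use (e.g. "gorm:started_transaction").
			tx.InstanceSet("example:create_started_at", time.Now())
		})
}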

23
vendor/gorm.io/gorm/callbacks/callmethod.go generated vendored Normal file

@@ -0,0 +1,23 @@
package callbacks
import (
"reflect"
"gorm.io/gorm"
)
func callMethod(db *gorm.DB, fc func(value interface{}, tx *gorm.DB) bool) {
tx := db.Session(&gorm.Session{NewDB: true})
if called := fc(db.Statement.ReflectValue.Interface(), tx); !called {
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
db.Statement.CurDestIndex = 0
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
fc(reflect.Indirect(db.Statement.ReflectValue.Index(i)).Addr().Interface(), tx)
db.Statement.CurDestIndex++
}
case reflect.Struct:
fc(db.Statement.ReflectValue.Addr().Interface(), tx)
}
}
}

370
vendor/gorm.io/gorm/callbacks/create.go generated vendored Normal file

@@ -0,0 +1,370 @@
package callbacks
import (
"fmt"
"reflect"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
)
func BeforeCreate(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && (db.Statement.Schema.BeforeSave || db.Statement.Schema.BeforeCreate) {
callMethod(db, func(value interface{}, tx *gorm.DB) (called bool) {
if db.Statement.Schema.BeforeSave {
if i, ok := value.(BeforeSaveInterface); ok {
called = true
db.AddError(i.BeforeSave(tx))
}
}
if db.Statement.Schema.BeforeCreate {
if i, ok := value.(BeforeCreateInterface); ok {
called = true
db.AddError(i.BeforeCreate(tx))
}
}
return called
})
}
}
func Create(config *Config) func(db *gorm.DB) {
if config.WithReturning {
return CreateWithReturning
} else {
return func(db *gorm.DB) {
if db.Error == nil {
if db.Statement.Schema != nil && !db.Statement.Unscoped {
for _, c := range db.Statement.Schema.CreateClauses {
db.Statement.AddClause(c)
}
}
if db.Statement.SQL.String() == "" {
db.Statement.SQL.Grow(180)
db.Statement.AddClauseIfNotExists(clause.Insert{})
db.Statement.AddClause(ConvertToCreateValues(db.Statement))
db.Statement.Build(db.Statement.BuildClauses...)
}
if !db.DryRun && db.Error == nil {
result, err := db.Statement.ConnPool.ExecContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err == nil {
db.RowsAffected, _ = result.RowsAffected()
if db.RowsAffected > 0 {
if db.Statement.Schema != nil && db.Statement.Schema.PrioritizedPrimaryField != nil && db.Statement.Schema.PrioritizedPrimaryField.HasDefaultValue {
if insertID, err := result.LastInsertId(); err == nil && insertID > 0 {
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
if config.LastInsertIDReversed {
for i := db.Statement.ReflectValue.Len() - 1; i >= 0; i-- {
rv := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(rv).Kind() != reflect.Struct {
break
}
_, isZero := db.Statement.Schema.PrioritizedPrimaryField.ValueOf(rv)
if isZero {
db.Statement.Schema.PrioritizedPrimaryField.Set(rv, insertID)
insertID -= db.Statement.Schema.PrioritizedPrimaryField.AutoIncrementIncrement
}
}
} else {
for i := 0; i < db.Statement.ReflectValue.Len(); i++ {
rv := db.Statement.ReflectValue.Index(i)
if reflect.Indirect(rv).Kind() != reflect.Struct {
break
}
if _, isZero := db.Statement.Schema.PrioritizedPrimaryField.ValueOf(rv); isZero {
db.Statement.Schema.PrioritizedPrimaryField.Set(rv, insertID)
insertID += db.Statement.Schema.PrioritizedPrimaryField.AutoIncrementIncrement
}
}
}
case reflect.Struct:
if _, isZero := db.Statement.Schema.PrioritizedPrimaryField.ValueOf(db.Statement.ReflectValue); isZero {
db.Statement.Schema.PrioritizedPrimaryField.Set(db.Statement.ReflectValue, insertID)
}
}
} else {
db.AddError(err)
}
}
}
} else {
db.AddError(err)
}
}
}
}
}
}
func CreateWithReturning(db *gorm.DB) {
if db.Error == nil {
if db.Statement.Schema != nil && !db.Statement.Unscoped {
for _, c := range db.Statement.Schema.CreateClauses {
db.Statement.AddClause(c)
}
}
if db.Statement.SQL.String() == "" {
db.Statement.AddClauseIfNotExists(clause.Insert{})
db.Statement.AddClause(ConvertToCreateValues(db.Statement))
db.Statement.Build(db.Statement.BuildClauses...)
}
if sch := db.Statement.Schema; sch != nil && len(sch.FieldsWithDefaultDBValue) > 0 {
db.Statement.WriteString(" RETURNING ")
var (
fields = make([]*schema.Field, len(sch.FieldsWithDefaultDBValue))
values = make([]interface{}, len(sch.FieldsWithDefaultDBValue))
)
for idx, field := range sch.FieldsWithDefaultDBValue {
if idx > 0 {
db.Statement.WriteByte(',')
}
fields[idx] = field
db.Statement.WriteQuoted(field.DBName)
}
if !db.DryRun && db.Error == nil {
db.RowsAffected = 0
rows, err := db.Statement.ConnPool.QueryContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err == nil {
defer rows.Close()
switch db.Statement.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
c := db.Statement.Clauses["ON CONFLICT"]
onConflict, _ := c.Expression.(clause.OnConflict)
for rows.Next() {
BEGIN:
reflectValue := db.Statement.ReflectValue.Index(int(db.RowsAffected))
if reflect.Indirect(reflectValue).Kind() != reflect.Struct {
break
}
for idx, field := range fields {
fieldValue := field.ReflectValueOf(reflectValue)
if onConflict.DoNothing && !fieldValue.IsZero() {
db.RowsAffected++
if int(db.RowsAffected) >= db.Statement.ReflectValue.Len() {
return
}
goto BEGIN
}
values[idx] = fieldValue.Addr().Interface()
}
db.RowsAffected++
if err := rows.Scan(values...); err != nil {
db.AddError(err)
}
}
case reflect.Struct:
for idx, field := range fields {
values[idx] = field.ReflectValueOf(db.Statement.ReflectValue).Addr().Interface()
}
if rows.Next() {
db.RowsAffected++
db.AddError(rows.Scan(values...))
}
}
} else {
db.AddError(err)
}
}
} else if !db.DryRun && db.Error == nil {
if result, err := db.Statement.ConnPool.ExecContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...); err == nil {
db.RowsAffected, _ = result.RowsAffected()
} else {
db.AddError(err)
}
}
}
}
func AfterCreate(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && (db.Statement.Schema.AfterSave || db.Statement.Schema.AfterCreate) {
callMethod(db, func(value interface{}, tx *gorm.DB) (called bool) {
if db.Statement.Schema.AfterSave {
if i, ok := value.(AfterSaveInterface); ok {
called = true
db.AddError(i.AfterSave(tx))
}
}
if db.Statement.Schema.AfterCreate {
if i, ok := value.(AfterCreateInterface); ok {
called = true
db.AddError(i.AfterCreate(tx))
}
}
return called
})
}
}
// ConvertToCreateValues converts the statement's create destination into clause values
func ConvertToCreateValues(stmt *gorm.Statement) (values clause.Values) {
switch value := stmt.Dest.(type) {
case map[string]interface{}:
values = ConvertMapToValuesForCreate(stmt, value)
case *map[string]interface{}:
values = ConvertMapToValuesForCreate(stmt, *value)
case []map[string]interface{}:
values = ConvertSliceOfMapToValuesForCreate(stmt, value)
case *[]map[string]interface{}:
values = ConvertSliceOfMapToValuesForCreate(stmt, *value)
default:
var (
selectColumns, restricted = stmt.SelectAndOmitColumns(true, false)
curTime = stmt.DB.NowFunc()
isZero bool
)
values = clause.Values{Columns: make([]clause.Column, 0, len(stmt.Schema.DBNames))}
for _, db := range stmt.Schema.DBNames {
if field := stmt.Schema.FieldsByDBName[db]; !field.HasDefaultValue || field.DefaultValueInterface != nil {
if v, ok := selectColumns[db]; (ok && v) || (!ok && (!restricted || field.AutoCreateTime > 0 || field.AutoUpdateTime > 0)) {
values.Columns = append(values.Columns, clause.Column{Name: db})
}
}
}
switch stmt.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
stmt.SQL.Grow(stmt.ReflectValue.Len() * 18)
values.Values = make([][]interface{}, stmt.ReflectValue.Len())
defaultValueFieldsHavingValue := map[*schema.Field][]interface{}{}
if stmt.ReflectValue.Len() == 0 {
stmt.AddError(gorm.ErrEmptySlice)
return
}
for i := 0; i < stmt.ReflectValue.Len(); i++ {
rv := reflect.Indirect(stmt.ReflectValue.Index(i))
if !rv.IsValid() {
stmt.AddError(fmt.Errorf("slice data #%v is invalid: %w", i, gorm.ErrInvalidData))
return
}
values.Values[i] = make([]interface{}, len(values.Columns))
for idx, column := range values.Columns {
field := stmt.Schema.FieldsByDBName[column.Name]
if values.Values[i][idx], isZero = field.ValueOf(rv); isZero {
if field.DefaultValueInterface != nil {
values.Values[i][idx] = field.DefaultValueInterface
field.Set(rv, field.DefaultValueInterface)
} else if field.AutoCreateTime > 0 || field.AutoUpdateTime > 0 {
field.Set(rv, curTime)
values.Values[i][idx], _ = field.ValueOf(rv)
}
} else if field.AutoUpdateTime > 0 {
if _, ok := stmt.DB.InstanceGet("gorm:update_track_time"); ok {
field.Set(rv, curTime)
values.Values[i][idx], _ = field.ValueOf(rv)
}
}
}
for _, field := range stmt.Schema.FieldsWithDefaultDBValue {
if v, ok := selectColumns[field.DBName]; (ok && v) || (!ok && !restricted) {
if v, isZero := field.ValueOf(rv); !isZero {
if len(defaultValueFieldsHavingValue[field]) == 0 {
defaultValueFieldsHavingValue[field] = make([]interface{}, stmt.ReflectValue.Len())
}
defaultValueFieldsHavingValue[field][i] = v
}
}
}
}
for field, vs := range defaultValueFieldsHavingValue {
values.Columns = append(values.Columns, clause.Column{Name: field.DBName})
for idx := range values.Values {
if vs[idx] == nil {
values.Values[idx] = append(values.Values[idx], stmt.Dialector.DefaultValueOf(field))
} else {
values.Values[idx] = append(values.Values[idx], vs[idx])
}
}
}
case reflect.Struct:
values.Values = [][]interface{}{make([]interface{}, len(values.Columns))}
for idx, column := range values.Columns {
field := stmt.Schema.FieldsByDBName[column.Name]
if values.Values[0][idx], isZero = field.ValueOf(stmt.ReflectValue); isZero {
if field.DefaultValueInterface != nil {
values.Values[0][idx] = field.DefaultValueInterface
field.Set(stmt.ReflectValue, field.DefaultValueInterface)
} else if field.AutoCreateTime > 0 || field.AutoUpdateTime > 0 {
field.Set(stmt.ReflectValue, curTime)
values.Values[0][idx], _ = field.ValueOf(stmt.ReflectValue)
}
} else if field.AutoUpdateTime > 0 {
if _, ok := stmt.DB.InstanceGet("gorm:update_track_time"); ok {
field.Set(stmt.ReflectValue, curTime)
values.Values[0][idx], _ = field.ValueOf(stmt.ReflectValue)
}
}
}
for _, field := range stmt.Schema.FieldsWithDefaultDBValue {
if v, ok := selectColumns[field.DBName]; (ok && v) || (!ok && !restricted) {
if v, isZero := field.ValueOf(stmt.ReflectValue); !isZero {
values.Columns = append(values.Columns, clause.Column{Name: field.DBName})
values.Values[0] = append(values.Values[0], v)
}
}
}
default:
stmt.AddError(gorm.ErrInvalidData)
}
}
if c, ok := stmt.Clauses["ON CONFLICT"]; ok {
if onConflict, _ := c.Expression.(clause.OnConflict); onConflict.UpdateAll {
if stmt.Schema != nil && len(values.Columns) > 1 {
columns := make([]string, 0, len(values.Columns)-1)
for _, column := range values.Columns {
if field := stmt.Schema.LookUpField(column.Name); field != nil {
if !field.PrimaryKey && (!field.HasDefaultValue || field.DefaultValueInterface != nil) && field.AutoCreateTime == 0 {
columns = append(columns, column.Name)
}
}
}
onConflict.DoUpdates = clause.AssignmentColumns(columns)
// use primary fields as default OnConflict columns
if len(onConflict.Columns) == 0 {
for _, field := range stmt.Schema.PrimaryFields {
onConflict.Columns = append(onConflict.Columns, clause.Column{Name: field.DBName})
}
}
stmt.AddClause(onConflict)
}
}
}
return values
}
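Illustrative sketch (not part of the vendored file): after the INSERT runs, Create above back-fills zero-valued auto-increment primary keys from LastInsertId (walking the slice in reverse when LastInsertIDReversed is set), while dialects with WithReturning scan defaulted columns back instead. The Article model and the Create finisher (defined elsewhere in gorm) are assumptions.
package example

import "gorm.io/gorm"

type Article struct {
	ID    uint // auto-increment primary key, back-filled after Create
	Title string
}

// demoBatchCreate inserts two rows in one statement; the callback above
// populates articles[0].ID and articles[1].ID from the insert result.
func demoBatchCreate(db *gorm.DB) ([]Article, error) {
	articles := []Article{{Title: "first"}, {Title: "second"}}
	if err := db.Create(&articles).Error; err != nil {
		return nil, err
	}
	return articles, nil
}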

168
vendor/gorm.io/gorm/callbacks/delete.go generated vendored Normal file
View file

@@ -0,0 +1,168 @@
package callbacks
import (
"reflect"
"strings"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
)
func BeforeDelete(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && db.Statement.Schema.BeforeDelete {
callMethod(db, func(value interface{}, tx *gorm.DB) bool {
if i, ok := value.(BeforeDeleteInterface); ok {
db.AddError(i.BeforeDelete(tx))
return true
}
return false
})
}
}
func DeleteBeforeAssociations(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil {
selectColumns, restricted := db.Statement.SelectAndOmitColumns(true, false)
if restricted {
for column, v := range selectColumns {
if v {
if rel, ok := db.Statement.Schema.Relationships.Relations[column]; ok {
switch rel.Type {
case schema.HasOne, schema.HasMany:
queryConds := rel.ToQueryConditions(db.Statement.ReflectValue)
modelValue := reflect.New(rel.FieldSchema.ModelType).Interface()
tx := db.Session(&gorm.Session{NewDB: true}).Model(modelValue)
withoutConditions := false
if db.Statement.Unscoped {
tx = tx.Unscoped()
}
if len(db.Statement.Selects) > 0 {
selects := make([]string, 0, len(db.Statement.Selects))
for _, s := range db.Statement.Selects {
if s == clause.Associations {
selects = append(selects, s)
} else if strings.HasPrefix(s, column+".") {
selects = append(selects, strings.TrimPrefix(s, column+"."))
}
}
if len(selects) > 0 {
tx = tx.Select(selects)
}
}
for _, cond := range queryConds {
if c, ok := cond.(clause.IN); ok && len(c.Values) == 0 {
withoutConditions = true
break
}
}
if !withoutConditions {
if db.AddError(tx.Clauses(clause.Where{Exprs: queryConds}).Delete(modelValue).Error) != nil {
return
}
}
case schema.Many2Many:
var (
queryConds = make([]clause.Expression, 0, len(rel.References))
foreignFields = make([]*schema.Field, 0, len(rel.References))
relForeignKeys = make([]string, 0, len(rel.References))
modelValue = reflect.New(rel.JoinTable.ModelType).Interface()
table = rel.JoinTable.Table
tx = db.Session(&gorm.Session{NewDB: true}).Model(modelValue).Table(table)
)
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
foreignFields = append(foreignFields, ref.PrimaryKey)
relForeignKeys = append(relForeignKeys, ref.ForeignKey.DBName)
} else if ref.PrimaryValue != "" {
queryConds = append(queryConds, clause.Eq{
Column: clause.Column{Table: rel.JoinTable.Table, Name: ref.ForeignKey.DBName},
Value: ref.PrimaryValue,
})
}
}
_, foreignValues := schema.GetIdentityFieldValuesMap(db.Statement.ReflectValue, foreignFields)
column, values := schema.ToQueryValues(table, relForeignKeys, foreignValues)
queryConds = append(queryConds, clause.IN{Column: column, Values: values})
if db.AddError(tx.Clauses(clause.Where{Exprs: queryConds}).Delete(modelValue).Error) != nil {
return
}
}
}
}
}
}
}
}
func Delete(db *gorm.DB) {
if db.Error == nil {
if db.Statement.Schema != nil && !db.Statement.Unscoped {
for _, c := range db.Statement.Schema.DeleteClauses {
db.Statement.AddClause(c)
}
}
if db.Statement.SQL.String() == "" {
db.Statement.SQL.Grow(100)
db.Statement.AddClauseIfNotExists(clause.Delete{})
if db.Statement.Schema != nil {
_, queryValues := schema.GetIdentityFieldValuesMap(db.Statement.ReflectValue, db.Statement.Schema.PrimaryFields)
column, values := schema.ToQueryValues(db.Statement.Table, db.Statement.Schema.PrimaryFieldDBNames, queryValues)
if len(values) > 0 {
db.Statement.AddClause(clause.Where{Exprs: []clause.Expression{clause.IN{Column: column, Values: values}}})
}
if db.Statement.ReflectValue.CanAddr() && db.Statement.Dest != db.Statement.Model && db.Statement.Model != nil {
_, queryValues = schema.GetIdentityFieldValuesMap(reflect.ValueOf(db.Statement.Model), db.Statement.Schema.PrimaryFields)
column, values = schema.ToQueryValues(db.Statement.Table, db.Statement.Schema.PrimaryFieldDBNames, queryValues)
if len(values) > 0 {
db.Statement.AddClause(clause.Where{Exprs: []clause.Expression{clause.IN{Column: column, Values: values}}})
}
}
}
db.Statement.AddClauseIfNotExists(clause.From{})
db.Statement.Build(db.Statement.BuildClauses...)
}
if _, ok := db.Statement.Clauses["WHERE"]; !db.AllowGlobalUpdate && !ok && db.Error == nil {
db.AddError(gorm.ErrMissingWhereClause)
return
}
if !db.DryRun && db.Error == nil {
result, err := db.Statement.ConnPool.ExecContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err == nil {
db.RowsAffected, _ = result.RowsAffected()
} else {
db.AddError(err)
}
}
}
}
func AfterDelete(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && db.Statement.Schema.AfterDelete {
callMethod(db, func(value interface{}, tx *gorm.DB) bool {
if i, ok := value.(AfterDeleteInterface); ok {
db.AddError(i.AfterDelete(tx))
return true
}
return false
})
}
}
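Illustrative sketch (not part of the vendored file): DeleteBeforeAssociations above only runs when association names are selected, e.g. via Select(clause.Associations) or Select("Orders"); it then deletes has one/has many rows and join-table rows before the main DELETE. The models and the Delete finisher (defined elsewhere in gorm) are assumptions.
package example

import (
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

type Order struct {
	ID     uint
	UserID uint
}

type User struct {
	ID     uint
	Orders []Order
}

// deleteUserWithOrders removes the user's orders first (via the
// DeleteBeforeAssociations callback above), then the user row itself.
func deleteUserWithOrders(db *gorm.DB, u *User) error {
	return db.Select(clause.Associations).Delete(u).Error
}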

95
vendor/gorm.io/gorm/callbacks/helper.go generated vendored Normal file

@@ -0,0 +1,95 @@
package callbacks
import (
"sort"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
// ConvertMapToValuesForCreate converts a map into values for creating
func ConvertMapToValuesForCreate(stmt *gorm.Statement, mapValue map[string]interface{}) (values clause.Values) {
values.Columns = make([]clause.Column, 0, len(mapValue))
selectColumns, restricted := stmt.SelectAndOmitColumns(true, false)
var keys = make([]string, 0, len(mapValue))
for k := range mapValue {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
value := mapValue[k]
if stmt.Schema != nil {
if field := stmt.Schema.LookUpField(k); field != nil {
k = field.DBName
}
}
if v, ok := selectColumns[k]; (ok && v) || (!ok && !restricted) {
values.Columns = append(values.Columns, clause.Column{Name: k})
if len(values.Values) == 0 {
values.Values = [][]interface{}{{}}
}
values.Values[0] = append(values.Values[0], value)
}
}
return
}
// ConvertSliceOfMapToValuesForCreate converts a slice of maps into values for creating
func ConvertSliceOfMapToValuesForCreate(stmt *gorm.Statement, mapValues []map[string]interface{}) (values clause.Values) {
var (
columns = make([]string, 0, len(mapValues))
)
// when mapValues is empty, return directly; there is
// no need to call the stmt.SelectAndOmitColumns method
if len(mapValues) == 0 {
stmt.AddError(gorm.ErrEmptySlice)
return
}
var (
result = make(map[string][]interface{}, len(mapValues))
selectColumns, restricted = stmt.SelectAndOmitColumns(true, false)
)
for idx, mapValue := range mapValues {
for k, v := range mapValue {
if stmt.Schema != nil {
if field := stmt.Schema.LookUpField(k); field != nil {
k = field.DBName
}
}
if _, ok := result[k]; !ok {
if v, ok := selectColumns[k]; (ok && v) || (!ok && !restricted) {
result[k] = make([]interface{}, len(mapValues))
columns = append(columns, k)
} else {
continue
}
}
result[k][idx] = v
}
}
sort.Strings(columns)
values.Values = make([][]interface{}, len(mapValues))
values.Columns = make([]clause.Column, len(columns))
for idx, column := range columns {
values.Columns[idx] = clause.Column{Name: column}
for i, v := range result[column] {
if len(values.Values[i]) == 0 {
values.Values[i] = make([]interface{}, len(columns))
}
values.Values[i][idx] = v
}
}
return
}
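Illustrative sketch (not part of the vendored file): these helpers are what create uses when the destination is a map or a slice of maps rather than a struct; columns are sorted for deterministic SQL, and keys missing from one map stay nil (NULL) in their column slot. The table name, column names, and the Table/Create calls are assumptions.
package example

import "gorm.io/gorm"

// createFromMaps inserts rows from plain maps; the helpers above turn them
// into clause values with sorted columns.
func createFromMaps(db *gorm.DB) error {
	if err := db.Table("users").Create(map[string]interface{}{
		"name": "jinzhu", "age": 18,
	}).Error; err != nil {
		return err
	}
	return db.Table("users").Create([]map[string]interface{}{
		{"name": "foo", "age": 20},
		{"name": "bar"}, // missing "age" stays nil in its column slot
	}).Error
}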

39
vendor/gorm.io/gorm/callbacks/interfaces.go generated vendored Normal file

@@ -0,0 +1,39 @@
package callbacks
import "gorm.io/gorm"
type BeforeCreateInterface interface {
BeforeCreate(*gorm.DB) error
}
type AfterCreateInterface interface {
AfterCreate(*gorm.DB) error
}
type BeforeUpdateInterface interface {
BeforeUpdate(*gorm.DB) error
}
type AfterUpdateInterface interface {
AfterUpdate(*gorm.DB) error
}
type BeforeSaveInterface interface {
BeforeSave(*gorm.DB) error
}
type AfterSaveInterface interface {
AfterSave(*gorm.DB) error
}
type BeforeDeleteInterface interface {
BeforeDelete(*gorm.DB) error
}
type AfterDeleteInterface interface {
AfterDelete(*gorm.DB) error
}
type AfterFindInterface interface {
AfterFind(*gorm.DB) error
}
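Illustrative sketch (not part of the vendored file): the callbacks only invoke hooks on types that implement these interfaces (and whose parsed schema flags them), passing a fresh session as tx. The Account model is an assumption.
package example

import (
	"errors"

	"gorm.io/gorm"
)

type Account struct {
	ID      uint
	Balance int64
}

// BeforeCreate satisfies BeforeCreateInterface; the create callback calls it
// via callMethod with a new session before building the INSERT.
func (a *Account) BeforeCreate(tx *gorm.DB) error {
	if a.Balance < 0 {
		return errors.New("balance must not be negative")
	}
	return nil
}

// AfterFind satisfies AfterFindInterface and runs after rows are scanned.
func (a *Account) AfterFind(tx *gorm.DB) error {
	return nil
}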

164
vendor/gorm.io/gorm/callbacks/preload.go generated vendored Normal file

@@ -0,0 +1,164 @@
package callbacks
import (
"reflect"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
"gorm.io/gorm/utils"
)
func preload(db *gorm.DB, rel *schema.Relationship, conds []interface{}, preloads map[string][]interface{}) {
var (
reflectValue = db.Statement.ReflectValue
tx = db.Session(&gorm.Session{NewDB: true}).Model(nil).Session(&gorm.Session{SkipHooks: db.Statement.SkipHooks})
relForeignKeys []string
relForeignFields []*schema.Field
foreignFields []*schema.Field
foreignValues [][]interface{}
identityMap = map[string][]reflect.Value{}
inlineConds []interface{}
)
db.Statement.Settings.Range(func(k, v interface{}) bool {
tx.Statement.Settings.Store(k, v)
return true
})
if rel.JoinTable != nil {
var (
joinForeignFields = make([]*schema.Field, 0, len(rel.References))
joinRelForeignFields = make([]*schema.Field, 0, len(rel.References))
joinForeignKeys = make([]string, 0, len(rel.References))
)
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
joinForeignKeys = append(joinForeignKeys, ref.ForeignKey.DBName)
joinForeignFields = append(joinForeignFields, ref.ForeignKey)
foreignFields = append(foreignFields, ref.PrimaryKey)
} else if ref.PrimaryValue != "" {
tx = tx.Where(clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
} else {
joinRelForeignFields = append(joinRelForeignFields, ref.ForeignKey)
relForeignKeys = append(relForeignKeys, ref.PrimaryKey.DBName)
relForeignFields = append(relForeignFields, ref.PrimaryKey)
}
}
joinIdentityMap, joinForeignValues := schema.GetIdentityFieldValuesMap(reflectValue, foreignFields)
if len(joinForeignValues) == 0 {
return
}
joinResults := rel.JoinTable.MakeSlice().Elem()
column, values := schema.ToQueryValues(clause.CurrentTable, joinForeignKeys, joinForeignValues)
db.AddError(tx.Where(clause.IN{Column: column, Values: values}).Find(joinResults.Addr().Interface()).Error)
// convert join identity map to relation identity map
fieldValues := make([]interface{}, len(joinForeignFields))
joinFieldValues := make([]interface{}, len(joinRelForeignFields))
for i := 0; i < joinResults.Len(); i++ {
for idx, field := range joinForeignFields {
fieldValues[idx], _ = field.ValueOf(joinResults.Index(i))
}
for idx, field := range joinRelForeignFields {
joinFieldValues[idx], _ = field.ValueOf(joinResults.Index(i))
}
if results, ok := joinIdentityMap[utils.ToStringKey(fieldValues...)]; ok {
joinKey := utils.ToStringKey(joinFieldValues...)
identityMap[joinKey] = append(identityMap[joinKey], results...)
}
}
_, foreignValues = schema.GetIdentityFieldValuesMap(joinResults, joinRelForeignFields)
} else {
for _, ref := range rel.References {
if ref.OwnPrimaryKey {
relForeignKeys = append(relForeignKeys, ref.ForeignKey.DBName)
relForeignFields = append(relForeignFields, ref.ForeignKey)
foreignFields = append(foreignFields, ref.PrimaryKey)
} else if ref.PrimaryValue != "" {
tx = tx.Where(clause.Eq{Column: ref.ForeignKey.DBName, Value: ref.PrimaryValue})
} else {
relForeignKeys = append(relForeignKeys, ref.PrimaryKey.DBName)
relForeignFields = append(relForeignFields, ref.PrimaryKey)
foreignFields = append(foreignFields, ref.ForeignKey)
}
}
identityMap, foreignValues = schema.GetIdentityFieldValuesMap(reflectValue, foreignFields)
if len(foreignValues) == 0 {
return
}
}
// nested preload
for p, pvs := range preloads {
tx = tx.Preload(p, pvs...)
}
reflectResults := rel.FieldSchema.MakeSlice().Elem()
column, values := schema.ToQueryValues(clause.CurrentTable, relForeignKeys, foreignValues)
for _, cond := range conds {
if fc, ok := cond.(func(*gorm.DB) *gorm.DB); ok {
tx = fc(tx)
} else {
inlineConds = append(inlineConds, cond)
}
}
db.AddError(tx.Where(clause.IN{Column: column, Values: values}).Find(reflectResults.Addr().Interface(), inlineConds...).Error)
fieldValues := make([]interface{}, len(relForeignFields))
// clean up old values before preloading
switch reflectValue.Kind() {
case reflect.Struct:
switch rel.Type {
case schema.HasMany, schema.Many2Many:
rel.Field.Set(reflectValue, reflect.MakeSlice(rel.Field.IndirectFieldType, 0, 10).Interface())
default:
rel.Field.Set(reflectValue, reflect.New(rel.Field.FieldType).Interface())
}
case reflect.Slice, reflect.Array:
for i := 0; i < reflectValue.Len(); i++ {
switch rel.Type {
case schema.HasMany, schema.Many2Many:
rel.Field.Set(reflectValue.Index(i), reflect.MakeSlice(rel.Field.IndirectFieldType, 0, 10).Interface())
default:
rel.Field.Set(reflectValue.Index(i), reflect.New(rel.Field.FieldType).Interface())
}
}
}
for i := 0; i < reflectResults.Len(); i++ {
elem := reflectResults.Index(i)
for idx, field := range relForeignFields {
fieldValues[idx], _ = field.ValueOf(elem)
}
for _, data := range identityMap[utils.ToStringKey(fieldValues...)] {
reflectFieldValue := rel.Field.ReflectValueOf(data)
if reflectFieldValue.Kind() == reflect.Ptr && reflectFieldValue.IsNil() {
reflectFieldValue.Set(reflect.New(rel.Field.FieldType.Elem()))
}
reflectFieldValue = reflect.Indirect(reflectFieldValue)
switch reflectFieldValue.Kind() {
case reflect.Struct:
rel.Field.Set(data, reflectResults.Index(i).Interface())
case reflect.Slice, reflect.Array:
if reflectFieldValue.Type().Elem().Kind() == reflect.Ptr {
rel.Field.Set(data, reflect.Append(reflectFieldValue, elem).Interface())
} else {
rel.Field.Set(data, reflect.Append(reflectFieldValue, elem.Elem()).Interface())
}
}
}
}
}
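Illustrative sketch (not part of the vendored file): preload above issues one extra query per relation (plus a join-table query for many2many), matches rows back by foreign-key values, and recurses for nested preload keys. The Customer/Order models, the `state` column, and the Find finisher (defined elsewhere in gorm) are assumptions; the Preload chainable itself appears in chainable_api.go below.
package example

import "gorm.io/gorm"

type Order struct {
	ID         uint
	CustomerID uint
	State      string
}

type Customer struct {
	ID     uint
	Orders []Order
}

// loadCustomersWithOpenOrders preloads the Orders association with an inline
// condition; nested keys such as "Orders.Items" would take the nested path.
func loadCustomersWithOpenOrders(db *gorm.DB) ([]Customer, error) {
	var customers []Customer
	err := db.Preload("Orders", "state NOT IN (?)", []string{"cancelled"}).Find(&customers).Error
	return customers, err
}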

228
vendor/gorm.io/gorm/callbacks/query.go generated vendored Normal file

@@ -0,0 +1,228 @@
package callbacks
import (
"fmt"
"reflect"
"sort"
"strings"
"gorm.io/gorm"
"gorm.io/gorm/clause"
)
func Query(db *gorm.DB) {
if db.Error == nil {
BuildQuerySQL(db)
if !db.DryRun && db.Error == nil {
rows, err := db.Statement.ConnPool.QueryContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err != nil {
db.AddError(err)
return
}
defer rows.Close()
gorm.Scan(rows, db, false)
}
}
}
func BuildQuerySQL(db *gorm.DB) {
if db.Statement.Schema != nil && !db.Statement.Unscoped {
for _, c := range db.Statement.Schema.QueryClauses {
db.Statement.AddClause(c)
}
}
if db.Statement.SQL.String() == "" {
db.Statement.SQL.Grow(100)
clauseSelect := clause.Select{Distinct: db.Statement.Distinct}
if db.Statement.ReflectValue.Kind() == reflect.Struct && db.Statement.ReflectValue.Type() == db.Statement.Schema.ModelType {
var conds []clause.Expression
for _, primaryField := range db.Statement.Schema.PrimaryFields {
if v, isZero := primaryField.ValueOf(db.Statement.ReflectValue); !isZero {
conds = append(conds, clause.Eq{Column: clause.Column{Table: db.Statement.Table, Name: primaryField.DBName}, Value: v})
}
}
if len(conds) > 0 {
db.Statement.AddClause(clause.Where{Exprs: conds})
}
}
if len(db.Statement.Selects) > 0 {
clauseSelect.Columns = make([]clause.Column, len(db.Statement.Selects))
for idx, name := range db.Statement.Selects {
if db.Statement.Schema == nil {
clauseSelect.Columns[idx] = clause.Column{Name: name, Raw: true}
} else if f := db.Statement.Schema.LookUpField(name); f != nil {
clauseSelect.Columns[idx] = clause.Column{Name: f.DBName}
} else {
clauseSelect.Columns[idx] = clause.Column{Name: name, Raw: true}
}
}
} else if db.Statement.Schema != nil && len(db.Statement.Omits) > 0 {
selectColumns, _ := db.Statement.SelectAndOmitColumns(false, false)
clauseSelect.Columns = make([]clause.Column, 0, len(db.Statement.Schema.DBNames))
for _, dbName := range db.Statement.Schema.DBNames {
if v, ok := selectColumns[dbName]; (ok && v) || !ok {
clauseSelect.Columns = append(clauseSelect.Columns, clause.Column{Table: db.Statement.Table, Name: dbName})
}
}
} else if db.Statement.Schema != nil && db.Statement.ReflectValue.IsValid() {
queryFields := db.QueryFields
if !queryFields {
switch db.Statement.ReflectValue.Kind() {
case reflect.Struct:
queryFields = db.Statement.ReflectValue.Type() != db.Statement.Schema.ModelType
case reflect.Slice:
queryFields = db.Statement.ReflectValue.Type().Elem() != db.Statement.Schema.ModelType
}
}
if queryFields {
stmt := gorm.Statement{DB: db}
// smaller struct
if err := stmt.Parse(db.Statement.Dest); err == nil && (db.QueryFields || stmt.Schema.ModelType != db.Statement.Schema.ModelType) {
clauseSelect.Columns = make([]clause.Column, len(stmt.Schema.DBNames))
for idx, dbName := range stmt.Schema.DBNames {
clauseSelect.Columns[idx] = clause.Column{Table: db.Statement.Table, Name: dbName}
}
}
}
}
// inline joins
if len(db.Statement.Joins) != 0 {
if len(db.Statement.Selects) == 0 && db.Statement.Schema != nil {
clauseSelect.Columns = make([]clause.Column, len(db.Statement.Schema.DBNames))
for idx, dbName := range db.Statement.Schema.DBNames {
clauseSelect.Columns[idx] = clause.Column{Table: db.Statement.Table, Name: dbName}
}
}
joins := []clause.Join{}
if fromClause, ok := db.Statement.Clauses["FROM"].Expression.(clause.From); ok {
joins = fromClause.Joins
}
for _, join := range db.Statement.Joins {
if db.Statement.Schema == nil {
joins = append(joins, clause.Join{
Expression: clause.NamedExpr{SQL: join.Name, Vars: join.Conds},
})
} else if relation, ok := db.Statement.Schema.Relationships.Relations[join.Name]; ok {
tableAliasName := relation.Name
for _, s := range relation.FieldSchema.DBNames {
clauseSelect.Columns = append(clauseSelect.Columns, clause.Column{
Table: tableAliasName,
Name: s,
Alias: tableAliasName + "__" + s,
})
}
exprs := make([]clause.Expression, len(relation.References))
for idx, ref := range relation.References {
if ref.OwnPrimaryKey {
exprs[idx] = clause.Eq{
Column: clause.Column{Table: clause.CurrentTable, Name: ref.PrimaryKey.DBName},
Value: clause.Column{Table: tableAliasName, Name: ref.ForeignKey.DBName},
}
} else {
if ref.PrimaryValue == "" {
exprs[idx] = clause.Eq{
Column: clause.Column{Table: clause.CurrentTable, Name: ref.ForeignKey.DBName},
Value: clause.Column{Table: tableAliasName, Name: ref.PrimaryKey.DBName},
}
} else {
exprs[idx] = clause.Eq{
Column: clause.Column{Table: tableAliasName, Name: ref.ForeignKey.DBName},
Value: ref.PrimaryValue,
}
}
}
}
joins = append(joins, clause.Join{
Type: clause.LeftJoin,
Table: clause.Table{Name: relation.FieldSchema.Table, Alias: tableAliasName},
ON: clause.Where{Exprs: exprs},
})
} else {
joins = append(joins, clause.Join{
Expression: clause.NamedExpr{SQL: join.Name, Vars: join.Conds},
})
}
}
db.Statement.Joins = nil
db.Statement.AddClause(clause.From{Joins: joins})
} else {
db.Statement.AddClauseIfNotExists(clause.From{})
}
db.Statement.AddClauseIfNotExists(clauseSelect)
db.Statement.Build(db.Statement.BuildClauses...)
}
}
func Preload(db *gorm.DB) {
if db.Error == nil && len(db.Statement.Preloads) > 0 {
preloadMap := map[string]map[string][]interface{}{}
for name := range db.Statement.Preloads {
preloadFields := strings.Split(name, ".")
if preloadFields[0] == clause.Associations {
for _, rel := range db.Statement.Schema.Relationships.Relations {
if rel.Schema == db.Statement.Schema {
if _, ok := preloadMap[rel.Name]; !ok {
preloadMap[rel.Name] = map[string][]interface{}{}
}
if value := strings.TrimPrefix(strings.TrimPrefix(name, preloadFields[0]), "."); value != "" {
preloadMap[rel.Name][value] = db.Statement.Preloads[name]
}
}
}
} else {
if _, ok := preloadMap[preloadFields[0]]; !ok {
preloadMap[preloadFields[0]] = map[string][]interface{}{}
}
if value := strings.TrimPrefix(strings.TrimPrefix(name, preloadFields[0]), "."); value != "" {
preloadMap[preloadFields[0]][value] = db.Statement.Preloads[name]
}
}
}
preloadNames := make([]string, 0, len(preloadMap))
for key := range preloadMap {
preloadNames = append(preloadNames, key)
}
sort.Strings(preloadNames)
for _, name := range preloadNames {
if rel := db.Statement.Schema.Relationships.Relations[name]; rel != nil {
preload(db, rel, db.Statement.Preloads[name], preloadMap[name])
} else {
db.AddError(fmt.Errorf("%v: %w for schema %v", name, gorm.ErrUnsupportedRelation, db.Statement.Schema.Name))
}
}
}
}
func AfterQuery(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && db.Statement.Schema.AfterFind && db.RowsAffected > 0 {
callMethod(db, func(value interface{}, tx *gorm.DB) bool {
if i, ok := value.(AfterFindInterface); ok {
db.AddError(i.AfterFind(tx))
return true
}
return false
})
}
}
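Illustrative sketch (not part of the vendored file): BuildQuerySQL above turns a relation name passed to Joins into a LEFT JOIN on the relation's table, aliasing joined columns as `<Relation>__<column>` so scanning can route them into the association. The Profile/User models and the Find finisher are assumptions.
package example

import "gorm.io/gorm"

type Profile struct {
	ID     uint
	UserID uint
	Bio    string
}

type User struct {
	ID      uint
	Name    string
	Profile Profile
}

// findWithJoinedProfile produces a single SELECT with
// LEFT JOIN `profiles` `Profile` ON ... and columns aliased Profile__bio etc.
func findWithJoinedProfile(db *gorm.DB) ([]User, error) {
	var users []User
	err := db.Joins("Profile").Find(&users).Error
	return users, err
}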

16
vendor/gorm.io/gorm/callbacks/raw.go generated vendored Normal file

@@ -0,0 +1,16 @@
package callbacks
import (
"gorm.io/gorm"
)
func RawExec(db *gorm.DB) {
if db.Error == nil && !db.DryRun {
result, err := db.Statement.ConnPool.ExecContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err != nil {
db.AddError(err)
} else {
db.RowsAffected, _ = result.RowsAffected()
}
}
}

21
vendor/gorm.io/gorm/callbacks/row.go generated vendored Normal file

@@ -0,0 +1,21 @@
package callbacks
import (
"gorm.io/gorm"
)
func RowQuery(db *gorm.DB) {
if db.Error == nil {
BuildQuerySQL(db)
if !db.DryRun {
if isRows, ok := db.InstanceGet("rows"); ok && isRows.(bool) {
db.Statement.Dest, db.Error = db.Statement.ConnPool.QueryContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
} else {
db.Statement.Dest = db.Statement.ConnPool.QueryRowContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
}
db.RowsAffected = -1
}
}
}
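Illustrative sketch (not part of the vendored file): RowQuery backs the Row/Rows finishers (defined elsewhere in gorm), storing either a *sql.Row or, when the "rows" instance flag is set, a *sql.Rows in Statement.Dest. The table name is an assumption.
package example

import "gorm.io/gorm"

// countUsers uses the Row finisher, which runs through the RowQuery callback above.
func countUsers(db *gorm.DB) (int64, error) {
	var n int64
	err := db.Table("users").Select("count(*)").Row().Scan(&n)
	return n, err
}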

31
vendor/gorm.io/gorm/callbacks/transaction.go generated vendored Normal file

@@ -0,0 +1,31 @@
package callbacks
import (
"gorm.io/gorm"
)
func BeginTransaction(db *gorm.DB) {
if !db.Config.SkipDefaultTransaction {
if tx := db.Begin(); tx.Error == nil {
db.Statement.ConnPool = tx.Statement.ConnPool
db.InstanceSet("gorm:started_transaction", true)
} else if tx.Error == gorm.ErrInvalidTransaction {
tx.Error = nil
} else {
db.Error = tx.Error
}
}
}
func CommitOrRollbackTransaction(db *gorm.DB) {
if !db.Config.SkipDefaultTransaction {
if _, ok := db.InstanceGet("gorm:started_transaction"); ok {
if db.Error == nil {
db.Commit()
} else {
db.Rollback()
}
db.Statement.ConnPool = db.ConnPool
}
}
}
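Illustrative sketch (not part of the vendored file): these two callbacks wrap each write in a transaction unless SkipDefaultTransaction is set, committing on success, rolling back on error, and restoring the original connection pool on the statement. The Session-level SkipDefaultTransaction flag and the Create finisher are public-API assumptions not shown in this file; the same flag also exists on gorm.Config.
package example

import "gorm.io/gorm"

type Event struct {
	ID   uint
	Name string
}

// fastInsert skips the default wrapping transaction for this session only,
// trading the automatic rollback of a failed Create for less overhead.
func fastInsert(db *gorm.DB, e *Event) error {
	return db.Session(&gorm.Session{SkipDefaultTransaction: true}).Create(e).Error
}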

261
vendor/gorm.io/gorm/callbacks/update.go generated vendored Normal file

@@ -0,0 +1,261 @@
package callbacks
import (
"reflect"
"sort"
"gorm.io/gorm"
"gorm.io/gorm/clause"
"gorm.io/gorm/schema"
)
func SetupUpdateReflectValue(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil {
if !db.Statement.ReflectValue.CanAddr() || db.Statement.Model != db.Statement.Dest {
db.Statement.ReflectValue = reflect.ValueOf(db.Statement.Model)
for db.Statement.ReflectValue.Kind() == reflect.Ptr {
db.Statement.ReflectValue = db.Statement.ReflectValue.Elem()
}
if dest, ok := db.Statement.Dest.(map[string]interface{}); ok {
for _, rel := range db.Statement.Schema.Relationships.BelongsTo {
if _, ok := dest[rel.Name]; ok {
rel.Field.Set(db.Statement.ReflectValue, dest[rel.Name])
}
}
}
}
}
}
func BeforeUpdate(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && (db.Statement.Schema.BeforeSave || db.Statement.Schema.BeforeUpdate) {
callMethod(db, func(value interface{}, tx *gorm.DB) (called bool) {
if db.Statement.Schema.BeforeSave {
if i, ok := value.(BeforeSaveInterface); ok {
called = true
db.AddError(i.BeforeSave(tx))
}
}
if db.Statement.Schema.BeforeUpdate {
if i, ok := value.(BeforeUpdateInterface); ok {
called = true
db.AddError(i.BeforeUpdate(tx))
}
}
return called
})
}
}
func Update(db *gorm.DB) {
if db.Error == nil {
if db.Statement.Schema != nil && !db.Statement.Unscoped {
for _, c := range db.Statement.Schema.UpdateClauses {
db.Statement.AddClause(c)
}
}
if db.Statement.SQL.String() == "" {
db.Statement.SQL.Grow(180)
db.Statement.AddClauseIfNotExists(clause.Update{})
if set := ConvertToAssignments(db.Statement); len(set) != 0 {
db.Statement.AddClause(set)
} else {
return
}
db.Statement.Build(db.Statement.BuildClauses...)
}
if _, ok := db.Statement.Clauses["WHERE"]; !db.AllowGlobalUpdate && !ok {
db.AddError(gorm.ErrMissingWhereClause)
return
}
if !db.DryRun && db.Error == nil {
result, err := db.Statement.ConnPool.ExecContext(db.Statement.Context, db.Statement.SQL.String(), db.Statement.Vars...)
if err == nil {
db.RowsAffected, _ = result.RowsAffected()
} else {
db.AddError(err)
}
}
}
}
func AfterUpdate(db *gorm.DB) {
if db.Error == nil && db.Statement.Schema != nil && !db.Statement.SkipHooks && (db.Statement.Schema.AfterSave || db.Statement.Schema.AfterUpdate) {
callMethod(db, func(value interface{}, tx *gorm.DB) (called bool) {
if db.Statement.Schema.AfterSave {
if i, ok := value.(AfterSaveInterface); ok {
called = true
db.AddError(i.AfterSave(tx))
}
}
if db.Statement.Schema.AfterUpdate {
if i, ok := value.(AfterUpdateInterface); ok {
called = true
db.AddError(i.AfterUpdate(tx))
}
}
return called
})
}
}
// ConvertToAssignments converts the update destination into assignments
func ConvertToAssignments(stmt *gorm.Statement) (set clause.Set) {
var (
selectColumns, restricted = stmt.SelectAndOmitColumns(false, true)
assignValue func(field *schema.Field, value interface{})
)
switch stmt.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
assignValue = func(field *schema.Field, value interface{}) {
for i := 0; i < stmt.ReflectValue.Len(); i++ {
field.Set(stmt.ReflectValue.Index(i), value)
}
}
case reflect.Struct:
assignValue = func(field *schema.Field, value interface{}) {
if stmt.ReflectValue.CanAddr() {
field.Set(stmt.ReflectValue, value)
}
}
default:
assignValue = func(field *schema.Field, value interface{}) {
}
}
updatingValue := reflect.ValueOf(stmt.Dest)
for updatingValue.Kind() == reflect.Ptr {
updatingValue = updatingValue.Elem()
}
if !updatingValue.CanAddr() || stmt.Dest != stmt.Model {
switch stmt.ReflectValue.Kind() {
case reflect.Slice, reflect.Array:
var primaryKeyExprs []clause.Expression
for i := 0; i < stmt.ReflectValue.Len(); i++ {
var exprs = make([]clause.Expression, len(stmt.Schema.PrimaryFields))
var notZero bool
for idx, field := range stmt.Schema.PrimaryFields {
value, isZero := field.ValueOf(stmt.ReflectValue.Index(i))
exprs[idx] = clause.Eq{Column: field.DBName, Value: value}
notZero = notZero || !isZero
}
if notZero {
primaryKeyExprs = append(primaryKeyExprs, clause.And(exprs...))
}
}
stmt.AddClause(clause.Where{Exprs: []clause.Expression{clause.Or(primaryKeyExprs...)}})
case reflect.Struct:
for _, field := range stmt.Schema.PrimaryFields {
if value, isZero := field.ValueOf(stmt.ReflectValue); !isZero {
stmt.AddClause(clause.Where{Exprs: []clause.Expression{clause.Eq{Column: field.DBName, Value: value}}})
}
}
}
}
switch value := updatingValue.Interface().(type) {
case map[string]interface{}:
set = make([]clause.Assignment, 0, len(value))
keys := make([]string, 0, len(value))
for k := range value {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
kv := value[k]
if _, ok := kv.(*gorm.DB); ok {
kv = []interface{}{kv}
}
if stmt.Schema != nil {
if field := stmt.Schema.LookUpField(k); field != nil {
if field.DBName != "" {
if v, ok := selectColumns[field.DBName]; (ok && v) || (!ok && !restricted) {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: kv})
assignValue(field, value[k])
}
} else if v, ok := selectColumns[field.Name]; (ok && v) || (!ok && !restricted) {
assignValue(field, value[k])
}
continue
}
}
if v, ok := selectColumns[k]; (ok && v) || (!ok && !restricted) {
set = append(set, clause.Assignment{Column: clause.Column{Name: k}, Value: kv})
}
}
if !stmt.SkipHooks && stmt.Schema != nil {
for _, dbName := range stmt.Schema.DBNames {
field := stmt.Schema.LookUpField(dbName)
if field.AutoUpdateTime > 0 && value[field.Name] == nil && value[field.DBName] == nil {
if v, ok := selectColumns[field.DBName]; (ok && v) || !ok {
now := stmt.DB.NowFunc()
assignValue(field, now)
if field.AutoUpdateTime == schema.UnixNanosecond {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: now.UnixNano()})
} else if field.AutoUpdateTime == schema.UnixMillisecond {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: now.UnixNano() / 1e6})
} else if field.GORMDataType == schema.Time {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: now})
} else {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: now.Unix()})
}
}
}
}
}
default:
switch updatingValue.Kind() {
case reflect.Struct:
set = make([]clause.Assignment, 0, len(stmt.Schema.FieldsByDBName))
for _, dbName := range stmt.Schema.DBNames {
field := stmt.Schema.LookUpField(dbName)
if !field.PrimaryKey || (!updatingValue.CanAddr() || stmt.Dest != stmt.Model) {
if v, ok := selectColumns[field.DBName]; (ok && v) || (!ok && (!restricted || (!stmt.SkipHooks && field.AutoUpdateTime > 0))) {
value, isZero := field.ValueOf(updatingValue)
if !stmt.SkipHooks && field.AutoUpdateTime > 0 {
if field.AutoUpdateTime == schema.UnixNanosecond {
value = stmt.DB.NowFunc().UnixNano()
} else if field.AutoUpdateTime == schema.UnixMillisecond {
value = stmt.DB.NowFunc().UnixNano() / 1e6
} else if field.GORMDataType == schema.Time {
value = stmt.DB.NowFunc()
} else {
value = stmt.DB.NowFunc().Unix()
}
isZero = false
}
if ok || !isZero {
set = append(set, clause.Assignment{Column: clause.Column{Name: field.DBName}, Value: value})
assignValue(field, value)
}
}
} else {
if value, isZero := field.ValueOf(updatingValue); !isZero {
stmt.AddClause(clause.Where{Exprs: []clause.Expression{clause.Eq{Column: field.DBName, Value: value}}})
}
}
}
default:
stmt.AddError(gorm.ErrInvalidData)
}
}
return
}
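Illustrative sketch (not part of the vendored file): for map updates, ConvertToAssignments above sorts the keys, applies Select/Omit, and appends auto-update-time columns the caller did not supply. The Product model, column names, and the Updates finisher (defined elsewhere in gorm) are assumptions.
package example

import (
	"time"

	"gorm.io/gorm"
)

type Product struct {
	ID        uint
	Price     int64
	UpdatedAt time.Time // auto-update-time field, appended to SET by ConvertToAssignments
}

// raisePrice updates one column from a map; updated_at is added to the SET
// clause automatically because it was not supplied and hooks are not skipped.
func raisePrice(db *gorm.DB, id uint, price int64) error {
	return db.Model(&Product{}).Where("id = ?", id).
		Updates(map[string]interface{}{"price": price}).Error
}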

293
vendor/gorm.io/gorm/chainable_api.go generated vendored Normal file

@@ -0,0 +1,293 @@
package gorm
import (
"fmt"
"regexp"
"strings"
"gorm.io/gorm/clause"
"gorm.io/gorm/utils"
)
// Model specifies the model on which you would like to run db operations
// // update all users' names to `hello`
// db.Model(&User{}).Update("name", "hello")
// // if the user's primary key is non-blank, it is used as the condition, so only that user's name is updated to `hello`
// db.Model(&user).Update("name", "hello")
func (db *DB) Model(value interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.Model = value
return
}
// Clauses Add clauses
func (db *DB) Clauses(conds ...clause.Expression) (tx *DB) {
tx = db.getInstance()
var whereConds []interface{}
for _, cond := range conds {
if c, ok := cond.(clause.Interface); ok {
tx.Statement.AddClause(c)
} else if optimizer, ok := cond.(StatementModifier); ok {
optimizer.ModifyStatement(tx.Statement)
} else {
whereConds = append(whereConds, cond)
}
}
if len(whereConds) > 0 {
tx.Statement.AddClause(clause.Where{Exprs: tx.Statement.BuildCondition(whereConds[0], whereConds[1:]...)})
}
return
}
var tableRegexp = regexp.MustCompile(`(?i).+? AS (\w+)\s*(?:$|,)`)
// Table specifies the table on which you would like to run db operations
func (db *DB) Table(name string, args ...interface{}) (tx *DB) {
tx = db.getInstance()
if strings.Contains(name, " ") || strings.Contains(name, "`") || len(args) > 0 {
tx.Statement.TableExpr = &clause.Expr{SQL: name, Vars: args}
if results := tableRegexp.FindStringSubmatch(name); len(results) == 2 {
tx.Statement.Table = results[1]
return
}
} else if tables := strings.Split(name, "."); len(tables) == 2 {
tx.Statement.TableExpr = &clause.Expr{SQL: tx.Statement.Quote(name)}
tx.Statement.Table = tables[1]
return
}
tx.Statement.Table = name
return
}
// Distinct specifies the distinct fields you want to query
func (db *DB) Distinct(args ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.Distinct = true
if len(args) > 0 {
tx = tx.Select(args[0], args[1:]...)
}
return
}
// Select specify fields that you want when querying, creating, updating
func (db *DB) Select(query interface{}, args ...interface{}) (tx *DB) {
tx = db.getInstance()
switch v := query.(type) {
case []string:
tx.Statement.Selects = v
for _, arg := range args {
switch arg := arg.(type) {
case string:
tx.Statement.Selects = append(tx.Statement.Selects, arg)
case []string:
tx.Statement.Selects = append(tx.Statement.Selects, arg...)
default:
tx.AddError(fmt.Errorf("unsupported select args %v %v", query, args))
return
}
}
delete(tx.Statement.Clauses, "SELECT")
case string:
if strings.Count(v, "?") >= len(args) && len(args) > 0 {
tx.Statement.AddClause(clause.Select{
Distinct: db.Statement.Distinct,
Expression: clause.Expr{SQL: v, Vars: args},
})
} else if strings.Count(v, "@") > 0 && len(args) > 0 {
tx.Statement.AddClause(clause.Select{
Distinct: db.Statement.Distinct,
Expression: clause.NamedExpr{SQL: v, Vars: args},
})
} else {
tx.Statement.Selects = []string{v}
for _, arg := range args {
switch arg := arg.(type) {
case string:
tx.Statement.Selects = append(tx.Statement.Selects, arg)
case []string:
tx.Statement.Selects = append(tx.Statement.Selects, arg...)
default:
tx.Statement.AddClause(clause.Select{
Distinct: db.Statement.Distinct,
Expression: clause.Expr{SQL: v, Vars: args},
})
return
}
}
delete(tx.Statement.Clauses, "SELECT")
}
default:
tx.AddError(fmt.Errorf("unsupported select args %v %v", query, args))
}
return
}
// Omit specify fields that you want to ignore when creating, updating and querying
func (db *DB) Omit(columns ...string) (tx *DB) {
tx = db.getInstance()
if len(columns) == 1 && strings.ContainsRune(columns[0], ',') {
tx.Statement.Omits = strings.FieldsFunc(columns[0], utils.IsValidDBNameChar)
} else {
tx.Statement.Omits = columns
}
return
}
// Where add conditions
func (db *DB) Where(query interface{}, args ...interface{}) (tx *DB) {
tx = db.getInstance()
if conds := tx.Statement.BuildCondition(query, args...); len(conds) > 0 {
tx.Statement.AddClause(clause.Where{Exprs: conds})
}
return
}
// Not add NOT conditions
func (db *DB) Not(query interface{}, args ...interface{}) (tx *DB) {
tx = db.getInstance()
if conds := tx.Statement.BuildCondition(query, args...); len(conds) > 0 {
tx.Statement.AddClause(clause.Where{Exprs: []clause.Expression{clause.Not(conds...)}})
}
return
}
// Or add OR conditions
func (db *DB) Or(query interface{}, args ...interface{}) (tx *DB) {
tx = db.getInstance()
if conds := tx.Statement.BuildCondition(query, args...); len(conds) > 0 {
tx.Statement.AddClause(clause.Where{Exprs: []clause.Expression{clause.Or(clause.And(conds...))}})
}
return
}
// Joins specify Joins conditions
// db.Joins("Account").Find(&user)
// db.Joins("JOIN emails ON emails.user_id = users.id AND emails.email = ?", "jinzhu@example.org").Find(&user)
func (db *DB) Joins(query string, args ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.Joins = append(tx.Statement.Joins, join{Name: query, Conds: args})
return
}
// Group specify the group method on the find
func (db *DB) Group(name string) (tx *DB) {
tx = db.getInstance()
fields := strings.FieldsFunc(name, utils.IsValidDBNameChar)
tx.Statement.AddClause(clause.GroupBy{
Columns: []clause.Column{{Name: name, Raw: len(fields) != 1}},
})
return
}
// Having specify HAVING conditions for GROUP BY
func (db *DB) Having(query interface{}, args ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.AddClause(clause.GroupBy{
Having: tx.Statement.BuildCondition(query, args...),
})
return
}
// Order specifies the order when retrieving records from the database
// db.Order("name DESC")
// db.Order(clause.OrderByColumn{Column: clause.Column{Name: "name"}, Desc: true})
func (db *DB) Order(value interface{}) (tx *DB) {
tx = db.getInstance()
switch v := value.(type) {
case clause.OrderByColumn:
tx.Statement.AddClause(clause.OrderBy{
Columns: []clause.OrderByColumn{v},
})
default:
tx.Statement.AddClause(clause.OrderBy{
Columns: []clause.OrderByColumn{{
Column: clause.Column{Name: fmt.Sprint(value), Raw: true},
}},
})
}
return
}
// Limit specify the number of records to be retrieved
func (db *DB) Limit(limit int) (tx *DB) {
tx = db.getInstance()
tx.Statement.AddClause(clause.Limit{Limit: limit})
return
}
// Offset specify the number of records to skip before starting to return the records
func (db *DB) Offset(offset int) (tx *DB) {
tx = db.getInstance()
tx.Statement.AddClause(clause.Limit{Offset: offset})
return
}
// Scopes pass current database connection to arguments `func(DB) DB`, which could be used to add conditions dynamically
// func AmountGreaterThan1000(db *gorm.DB) *gorm.DB {
// return db.Where("amount > ?", 1000)
// }
//
// func OrderStatus(status []string) func (db *gorm.DB) *gorm.DB {
// return func (db *gorm.DB) *gorm.DB {
// return db.Scopes(AmountGreaterThan1000).Where("status in (?)", status)
// }
// }
//
// db.Scopes(AmountGreaterThan1000, OrderStatus([]string{"paid", "shipped"})).Find(&orders)
func (db *DB) Scopes(funcs ...func(*DB) *DB) (tx *DB) {
tx = db.getInstance()
tx.Statement.scopes = append(tx.Statement.scopes, funcs...)
return tx
}
// Preload preload associations with given conditions
// db.Preload("Orders", "state NOT IN (?)", "cancelled").Find(&users)
func (db *DB) Preload(query string, args ...interface{}) (tx *DB) {
tx = db.getInstance()
if tx.Statement.Preloads == nil {
tx.Statement.Preloads = map[string][]interface{}{}
}
tx.Statement.Preloads[query] = args
return
}
func (db *DB) Attrs(attrs ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.attrs = attrs
return
}
func (db *DB) Assign(attrs ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.assigns = attrs
return
}
func (db *DB) Unscoped() (tx *DB) {
tx = db.getInstance()
tx.Statement.Unscoped = true
return
}
func (db *DB) Raw(sql string, values ...interface{}) (tx *DB) {
tx = db.getInstance()
tx.Statement.SQL = strings.Builder{}
if strings.Contains(sql, "@") {
clause.NamedExpr{SQL: sql, Vars: values}.Build(tx.Statement)
} else {
clause.Expr{SQL: sql, Vars: values}.Build(tx.Statement)
}
return
}
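Illustrative sketch (not part of the vendored file): the chainable methods above only stash clauses on the statement; a finisher such as Find (defined elsewhere in gorm) builds and executes the SQL. The model and column names are assumptions.
package example

import "gorm.io/gorm"

type User struct {
	ID   uint
	Name string
	Age  int
}

// recentAdults combines several chainable calls; each adds a clause, and Find
// finally builds SELECT ... WHERE ... ORDER BY ... LIMIT ... OFFSET ...
func recentAdults(db *gorm.DB, page, size int) ([]User, error) {
	var users []User
	err := db.Where("age >= ?", 18).
		Not("name = ?", "banned").
		Order("id DESC").
		Limit(size).
		Offset((page - 1) * size).
		Find(&users).Error
	return users, err
}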

88
vendor/gorm.io/gorm/clause/clause.go generated vendored Normal file

@@ -0,0 +1,88 @@
package clause
// Interface clause interface
type Interface interface {
Name() string
Build(Builder)
MergeClause(*Clause)
}
// ClauseBuilder clause builder, allows customizing how a clause is built
type ClauseBuilder func(Clause, Builder)
type Writer interface {
WriteByte(byte) error
WriteString(string) (int, error)
}
// Builder builder interface
type Builder interface {
Writer
WriteQuoted(field interface{})
AddVar(Writer, ...interface{})
}
// Clause
type Clause struct {
Name string // WHERE
BeforeExpression Expression
AfterNameExpression Expression
AfterExpression Expression
Expression Expression
Builder ClauseBuilder
}
// Build build clause
func (c Clause) Build(builder Builder) {
if c.Builder != nil {
c.Builder(c, builder)
} else if c.Expression != nil {
if c.BeforeExpression != nil {
c.BeforeExpression.Build(builder)
builder.WriteByte(' ')
}
if c.Name != "" {
builder.WriteString(c.Name)
builder.WriteByte(' ')
}
if c.AfterNameExpression != nil {
c.AfterNameExpression.Build(builder)
builder.WriteByte(' ')
}
c.Expression.Build(builder)
if c.AfterExpression != nil {
builder.WriteByte(' ')
c.AfterExpression.Build(builder)
}
}
}
const (
PrimaryKey string = "~~~py~~~" // primary key
CurrentTable string = "~~~ct~~~" // current table
Associations string = "~~~as~~~" // associations
)
var (
currentTable = Table{Name: CurrentTable}
PrimaryColumn = Column{Table: CurrentTable, Name: PrimaryKey}
)
// Column quote with name
type Column struct {
Table string
Name string
Alias string
Raw bool
}
// Table quote with name
type Table struct {
Name string
Alias string
Raw bool
}

23
vendor/gorm.io/gorm/clause/delete.go generated vendored Normal file

@@ -0,0 +1,23 @@
package clause
type Delete struct {
Modifier string
}
func (d Delete) Name() string {
return "DELETE"
}
func (d Delete) Build(builder Builder) {
builder.WriteString("DELETE")
if d.Modifier != "" {
builder.WriteByte(' ')
builder.WriteString(d.Modifier)
}
}
func (d Delete) MergeClause(clause *Clause) {
clause.Name = ""
clause.Expression = d
}

344
vendor/gorm.io/gorm/clause/expression.go generated vendored Normal file

@@ -0,0 +1,344 @@
package clause
import (
"database/sql"
"database/sql/driver"
"go/ast"
"reflect"
)
// Expression expression interface
type Expression interface {
Build(builder Builder)
}
// NegationExpressionBuilder negation expression builder
type NegationExpressionBuilder interface {
NegationBuild(builder Builder)
}
// Expr raw expression
type Expr struct {
SQL string
Vars []interface{}
WithoutParentheses bool
}
// Build build raw expression
func (expr Expr) Build(builder Builder) {
var (
afterParenthesis bool
idx int
)
for _, v := range []byte(expr.SQL) {
if v == '?' && len(expr.Vars) > idx {
if afterParenthesis || expr.WithoutParentheses {
if _, ok := expr.Vars[idx].(driver.Valuer); ok {
builder.AddVar(builder, expr.Vars[idx])
} else {
switch rv := reflect.ValueOf(expr.Vars[idx]); rv.Kind() {
case reflect.Slice, reflect.Array:
if rv.Len() == 0 {
builder.AddVar(builder, nil)
} else {
for i := 0; i < rv.Len(); i++ {
if i > 0 {
builder.WriteByte(',')
}
builder.AddVar(builder, rv.Index(i).Interface())
}
}
default:
builder.AddVar(builder, expr.Vars[idx])
}
}
} else {
builder.AddVar(builder, expr.Vars[idx])
}
idx++
} else {
if v == '(' {
afterParenthesis = true
} else {
afterParenthesis = false
}
builder.WriteByte(v)
}
}
}
// NamedExpr raw expression for named expr
type NamedExpr struct {
SQL string
Vars []interface{}
}
// Build build raw expression
func (expr NamedExpr) Build(builder Builder) {
var (
idx int
inName bool
afterParenthesis bool
namedMap = make(map[string]interface{}, len(expr.Vars))
)
for _, v := range expr.Vars {
switch value := v.(type) {
case sql.NamedArg:
namedMap[value.Name] = value.Value
case map[string]interface{}:
for k, v := range value {
namedMap[k] = v
}
default:
var appendFieldsToMap func(reflect.Value)
appendFieldsToMap = func(reflectValue reflect.Value) {
reflectValue = reflect.Indirect(reflectValue)
switch reflectValue.Kind() {
case reflect.Struct:
modelType := reflectValue.Type()
for i := 0; i < modelType.NumField(); i++ {
if fieldStruct := modelType.Field(i); ast.IsExported(fieldStruct.Name) {
namedMap[fieldStruct.Name] = reflectValue.Field(i).Interface()
if fieldStruct.Anonymous {
appendFieldsToMap(reflectValue.Field(i))
}
}
}
}
}
appendFieldsToMap(reflect.ValueOf(value))
}
}
name := make([]byte, 0, 10)
for _, v := range []byte(expr.SQL) {
if v == '@' && !inName {
inName = true
name = []byte{}
} else if v == ' ' || v == ',' || v == ')' || v == '"' || v == '\'' || v == '`' || v == '\n' {
if inName {
if nv, ok := namedMap[string(name)]; ok {
builder.AddVar(builder, nv)
} else {
builder.WriteByte('@')
builder.WriteString(string(name))
}
inName = false
}
afterParenthesis = false
builder.WriteByte(v)
} else if v == '?' && len(expr.Vars) > idx {
if afterParenthesis {
if _, ok := expr.Vars[idx].(driver.Valuer); ok {
builder.AddVar(builder, expr.Vars[idx])
} else {
switch rv := reflect.ValueOf(expr.Vars[idx]); rv.Kind() {
case reflect.Slice, reflect.Array:
if rv.Len() == 0 {
builder.AddVar(builder, nil)
} else {
for i := 0; i < rv.Len(); i++ {
if i > 0 {
builder.WriteByte(',')
}
builder.AddVar(builder, rv.Index(i).Interface())
}
}
default:
builder.AddVar(builder, expr.Vars[idx])
}
}
} else {
builder.AddVar(builder, expr.Vars[idx])
}
idx++
} else if inName {
name = append(name, v)
} else {
if v == '(' {
afterParenthesis = true
} else {
afterParenthesis = false
}
builder.WriteByte(v)
}
}
if inName {
builder.AddVar(builder, namedMap[string(name)])
}
}
// IN Whether a value is within a set of values
type IN struct {
Column interface{}
Values []interface{}
}
func (in IN) Build(builder Builder) {
builder.WriteQuoted(in.Column)
switch len(in.Values) {
case 0:
builder.WriteString(" IN (NULL)")
case 1:
if _, ok := in.Values[0].([]interface{}); !ok {
builder.WriteString(" = ")
builder.AddVar(builder, in.Values[0])
break
}
fallthrough
default:
builder.WriteString(" IN (")
builder.AddVar(builder, in.Values...)
builder.WriteByte(')')
}
}
func (in IN) NegationBuild(builder Builder) {
switch len(in.Values) {
case 0:
case 1:
if _, ok := in.Values[0].([]interface{}); !ok {
builder.WriteQuoted(in.Column)
builder.WriteString(" <> ")
builder.AddVar(builder, in.Values[0])
break
}
fallthrough
default:
builder.WriteQuoted(in.Column)
builder.WriteString(" NOT IN (")
builder.AddVar(builder, in.Values...)
builder.WriteByte(')')
}
}
// Eq equal to for where
type Eq struct {
Column interface{}
Value interface{}
}
func (eq Eq) Build(builder Builder) {
builder.WriteQuoted(eq.Column)
if eqNil(eq.Value) {
builder.WriteString(" IS NULL")
} else {
builder.WriteString(" = ")
builder.AddVar(builder, eq.Value)
}
}
func (eq Eq) NegationBuild(builder Builder) {
Neq(eq).Build(builder)
}
// Neq not equal to for where
type Neq Eq
func (neq Neq) Build(builder Builder) {
builder.WriteQuoted(neq.Column)
if eqNil(neq.Value) {
builder.WriteString(" IS NOT NULL")
} else {
builder.WriteString(" <> ")
builder.AddVar(builder, neq.Value)
}
}
func (neq Neq) NegationBuild(builder Builder) {
Eq(neq).Build(builder)
}
// Gt greater than for where
type Gt Eq
func (gt Gt) Build(builder Builder) {
builder.WriteQuoted(gt.Column)
builder.WriteString(" > ")
builder.AddVar(builder, gt.Value)
}
func (gt Gt) NegationBuild(builder Builder) {
Lte(gt).Build(builder)
}
// Gte greater than or equal to for where
type Gte Eq
func (gte Gte) Build(builder Builder) {
builder.WriteQuoted(gte.Column)
builder.WriteString(" >= ")
builder.AddVar(builder, gte.Value)
}
func (gte Gte) NegationBuild(builder Builder) {
Lt(gte).Build(builder)
}
// Lt less than for where
type Lt Eq
func (lt Lt) Build(builder Builder) {
builder.WriteQuoted(lt.Column)
builder.WriteString(" < ")
builder.AddVar(builder, lt.Value)
}
func (lt Lt) NegationBuild(builder Builder) {
Gte(lt).Build(builder)
}
// Lte less than or equal to for where
type Lte Eq
func (lte Lte) Build(builder Builder) {
builder.WriteQuoted(lte.Column)
builder.WriteString(" <= ")
builder.AddVar(builder, lte.Value)
}
func (lte Lte) NegationBuild(builder Builder) {
Gt(lte).Build(builder)
}
// Like whether a string matches a SQL LIKE pattern
type Like Eq
func (like Like) Build(builder Builder) {
builder.WriteQuoted(like.Column)
builder.WriteString(" LIKE ")
builder.AddVar(builder, like.Value)
}
func (like Like) NegationBuild(builder Builder) {
builder.WriteQuoted(like.Column)
builder.WriteString(" NOT LIKE ")
builder.AddVar(builder, like.Value)
}
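// eqNil reports whether a value should be rendered as SQL NULL: a nil interface,
// a nil pointer, or a driver.Valuer that resolves to nil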
func eqNil(value interface{}) bool {
if valuer, ok := value.(driver.Valuer); ok {
value, _ = valuer.Value()
}
return value == nil || eqNilReflect(value)
}
func eqNilReflect(value interface{}) bool {
reflectValue := reflect.ValueOf(value)
return reflectValue.Kind() == reflect.Ptr && reflectValue.IsNil()
}
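For orientation (not part of the vendored file above): a minimal sketch of how these expression types render, driven through gorm's DryRun mode so the generated SQL can be inspected without touching a database. The gorm.io/driver/sqlite dialector and the show helper are assumptions chosen only for illustration; any dialector would do, and the commented SQL shapes are indicative rather than exact.

package main

import (
	"database/sql"
	"fmt"

	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
	"gorm.io/gorm/clause"
)

type User struct {
	ID   uint
	Name string
}

func main() {
	// DryRun builds the SQL without executing it.
	db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{DryRun: true})
	if err != nil {
		panic(err)
	}

	// show prints the SQL and bind vars produced for a single condition.
	show := func(cond clause.Expression) {
		stmt := db.Model(&User{}).Where(cond).Find(&[]User{}).Statement
		fmt.Println(stmt.SQL.String(), stmt.Vars)
	}

	// Expr: a slice bound to a ? that follows "(" is expanded to ?,?,?.
	show(clause.Expr{SQL: "id IN (?)", Vars: []interface{}{[]int{1, 2, 3}}})

	// NamedExpr: @name placeholders are resolved from sql.Named values or maps.
	show(clause.NamedExpr{
		SQL:  "name = @name AND role = @role",
		Vars: []interface{}{sql.Named("name", "jinzhu"), map[string]interface{}{"role": "admin"}},
	})

	// IN: empty -> IN (NULL), one non-slice value -> =, otherwise IN (...); Not flips it.
	show(clause.IN{Column: "id", Values: []interface{}{1}})
	show(clause.Not(clause.IN{Column: "id", Values: []interface{}{1, 2, 3}}))

	// Eq/Neq: nil values (including nil pointers) render IS NULL / IS NOT NULL.
	show(clause.Eq{Column: "name", Value: nil})
	show(clause.Neq{Column: "name", Value: (*string)(nil)})
	show(clause.Like{Column: "name", Value: "jin%"})
}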

37
vendor/gorm.io/gorm/clause/from.go generated vendored Normal file
View file

@@ -0,0 +1,37 @@
package clause
// From from clause
type From struct {
Tables []Table
Joins []Join
}
// Name from clause name
func (from From) Name() string {
return "FROM"
}
// Build builds the FROM clause
func (from From) Build(builder Builder) {
if len(from.Tables) > 0 {
for idx, table := range from.Tables {
if idx > 0 {
builder.WriteByte(',')
}
builder.WriteQuoted(table)
}
} else {
builder.WriteQuoted(currentTable)
}
for _, join := range from.Joins {
builder.WriteByte(' ')
join.Build(builder)
}
}
// MergeClause merge from clause
func (from From) MergeClause(clause *Clause) {
clause.Expression = from
}
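A hedged sketch of what feeds From in practice, reusing the DryRun setup assumed above: the quoted currentTable resolves to the statement's model table, and a string passed to db.Joins is carried into From.Joins and written after it. The fromDemo name is illustrative, not part of this commit.

package example

import (
	"fmt"

	"gorm.io/gorm"
)

type User struct{ ID uint }

// fromDemo assumes db was opened with gorm.Config{DryRun: true}.
func fromDemo(db *gorm.DB) {
	stmt := db.Model(&User{}).
		Joins("LEFT JOIN orders ON orders.user_id = users.id").
		Find(&[]User{}).Statement

	// Indicative shape:
	// SELECT ... FROM `users` LEFT JOIN orders ON orders.user_id = users.id
	fmt.Println(stmt.SQL.String())
}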

48
vendor/gorm.io/gorm/clause/group_by.go generated vendored Normal file
View file

@@ -0,0 +1,48 @@
package clause
// GroupBy group by clause
type GroupBy struct {
Columns []Column
Having []Expression
}
// Name group by clause name
func (groupBy GroupBy) Name() string {
return "GROUP BY"
}
// Build builds the GROUP BY clause
func (groupBy GroupBy) Build(builder Builder) {
for idx, column := range groupBy.Columns {
if idx > 0 {
builder.WriteByte(',')
}
builder.WriteQuoted(column)
}
if len(groupBy.Having) > 0 {
builder.WriteString(" HAVING ")
Where{Exprs: groupBy.Having}.Build(builder)
}
}
// MergeClause merge group by clause
func (groupBy GroupBy) MergeClause(clause *Clause) {
if v, ok := clause.Expression.(GroupBy); ok {
copiedColumns := make([]Column, len(v.Columns))
copy(copiedColumns, v.Columns)
groupBy.Columns = append(copiedColumns, groupBy.Columns...)
copiedHaving := make([]Expression, len(v.Having))
copy(copiedHaving, v.Having)
groupBy.Having = append(copiedHaving, groupBy.Having...)
}
clause.Expression = groupBy
if len(groupBy.Columns) == 0 {
clause.Name = ""
} else {
clause.Name = groupBy.Name()
}
}
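Likewise for GroupBy, again assuming a DryRun *gorm.DB: db.Group appends a Column, db.Having appends conditions that Build renders through the Where builder after HAVING, and MergeClause accumulates both across repeated calls. The groupByDemo name and the Order model are illustrative only.

package example

import (
	"fmt"

	"gorm.io/gorm"
)

type Order struct {
	ID     uint
	UserID uint
	Amount int
}

// groupByDemo assumes db was opened with gorm.Config{DryRun: true}.
func groupByDemo(db *gorm.DB) {
	stmt := db.Model(&Order{}).
		Select("user_id, sum(amount) AS total").
		Group("user_id").
		Having("sum(amount) > ?", 100).
		Find(&[]Order{}).Statement

	// Indicative shape:
	// SELECT user_id, sum(amount) AS total FROM `orders` GROUP BY `user_id` HAVING sum(amount) > ?  [100]
	fmt.Println(stmt.SQL.String(), stmt.Vars)
}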

Some files were not shown because too many files have changed in this diff.