[pve-devel] [PATCH docs] change http links to https

Oguz Bektas o.bektas at proxmox.com
Thu Apr 29 13:59:42 CEST 2021


checked that they work -- some returned certificate errors, so those
were left unchanged.

also updated a few that no longer pointed to the right target
(open-iscsi, and the list of supported Intel CPUs, which was returning
an empty result).
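The mechanical part of the change can be sketched as below (an illustration, not the actual commands used for this patch; after a bulk rewrite, each link would still be probed by hand and any site with certificate errors reverted):

```shell
# Sketch of the bulk http:// -> https:// rewrite over the AsciiDoc sources
# (illustrative only; the real patch was verified link-by-link):
#   sed -i 's|http://|https://|g' *.adoc          # rewrite in place
#   curl -sIf https://www.corosync.org >/dev/null # probe a rewritten link
#
# Single-line demonstration of the substitution itself:
out=$(printf '%s\n' 'see http://www.corosync.org[Corosync]' | sed 's|http://|https://|g')
echo "$out"   # → see https://www.corosync.org[Corosync]
```

Links whose hosts fail the `curl` probe (e.g. with a certificate error) are the ones deliberately kept on plain http in this patch.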

Signed-off-by: Oguz Bektas <o.bektas at proxmox.com>
---
 pmxcfs.adoc                     | 6 +++---
 pve-copyright.adoc              | 2 +-
 pve-external-metric-server.adoc | 2 +-
 pve-faq.adoc                    | 9 +++++----
 pve-firewall.adoc               | 2 +-
 pve-installation.adoc           | 4 ++--
 pve-intro.adoc                  | 6 +++---
 pve-storage-cephfs.adoc         | 2 +-
 pve-storage-iscsi.adoc          | 4 ++--
 pve-storage-rbd.adoc            | 2 +-
 pveum.adoc                      | 2 +-
 qm-cloud-init.adoc              | 2 +-
 qm.adoc                         | 6 +++---
 13 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/pmxcfs.adoc b/pmxcfs.adoc
index 7b9cfac..d4579a7 100644
--- a/pmxcfs.adoc
+++ b/pmxcfs.adoc
@@ -78,10 +78,10 @@ are only accessible by root.
 Technology
 ----------
 
-We use the http://www.corosync.org[Corosync Cluster Engine] for
-cluster communication, and http://www.sqlite.org[SQlite] for the
+We use the https://www.corosync.org[Corosync Cluster Engine] for
+cluster communication, and https://www.sqlite.org[SQlite] for the
 database file. The file system is implemented in user space using
-http://fuse.sourceforge.net[FUSE].
+https://github.com/libfuse/libfuse[FUSE].
 
 File System Layout
 ------------------
diff --git a/pve-copyright.adoc b/pve-copyright.adoc
index 15f1d4f..44d4039 100644
--- a/pve-copyright.adoc
+++ b/pve-copyright.adoc
@@ -15,4 +15,4 @@ Affero General Public License for more details.
 
 You should have received a copy of the GNU Affero General Public
 License along with this program.  If not, see
-http://www.gnu.org/licenses/
+https://www.gnu.org/licenses/
diff --git a/pve-external-metric-server.adoc b/pve-external-metric-server.adoc
index eace999..641fc42 100644
--- a/pve-external-metric-server.adoc
+++ b/pve-external-metric-server.adoc
@@ -12,7 +12,7 @@ receive various stats about your hosts, virtual guests and storages.
 
 Currently supported are:
 
- * Graphite (see http://graphiteapp.org )
+ * Graphite (see https://graphiteapp.org )
  * InfluxDB (see https://www.influxdata.com/time-series-platform/influxdb/ )
 
 The external metric server definitions are saved in '/etc/pve/status.cfg', and
diff --git a/pve-faq.adoc b/pve-faq.adoc
index 9d1d708..4e8f9f9 100644
--- a/pve-faq.adoc
+++ b/pve-faq.adoc
@@ -17,7 +17,7 @@ ADD NEW FAQS TO THE BOTTOM OF THIS SECTION TO MAINTAIN NUMBERING
 
 What distribution is {pve} based on?::
 
-{pve} is based on http://www.debian.org[Debian GNU/Linux]
+{pve} is based on https://www.debian.org[Debian GNU/Linux]
 
 What license does the {pve} project use?::
 
@@ -43,13 +43,14 @@ egrep '(vmx|svm)' /proc/cpuinfo
 Supported Intel CPUs::
 
 64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
-Virtualization Technology (Intel VT-x)] support. (http://ark.intel.com/search/advanced/?s=t&VTX=true&InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
+https://en.wikipedia.org/wiki/Virtualization_Technology#Intel_virtualization_.28VT-x.29[Intel
+Virtualization Technology (Intel VT-x)] support.
+(https://ark.intel.com/content/www/us/en/ark/search/featurefilter.html?productType=873&2_VTX=True&2_InstructionSet=64-bit[List of processors with Intel VT and 64-bit])
 
 Supported AMD CPUs::
 
 64-bit processors with
-http://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
+https://en.wikipedia.org/wiki/Virtualization_Technology#AMD_virtualization_.28AMD-V.29[AMD
 Virtualization Technology (AMD-V)] support.
 
 What is a container/virtual environment (VE)/virtual private server (VPS)?::
diff --git a/pve-firewall.adoc b/pve-firewall.adoc
index faf580c..f59c302 100644
--- a/pve-firewall.adoc
+++ b/pve-firewall.adoc
@@ -562,7 +562,7 @@ and add `ip_conntrack_ftp` to `/etc/modules` (so that it works after a reboot).
 Suricata IPS integration
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-If you want to use the http://suricata-ids.org/[Suricata IPS]
+If you want to use the https://suricata-ids.org/[Suricata IPS]
 (Intrusion Prevention System), it's possible.
 
 Packets will be forwarded to the IPS only after the firewall ACCEPTed
diff --git a/pve-installation.adoc b/pve-installation.adoc
index 7709cb0..dff6a4a 100644
--- a/pve-installation.adoc
+++ b/pve-installation.adoc
@@ -304,10 +304,10 @@ Video Tutorials
 ---------------
 
 * List of all official tutorials on our
-  http://www.youtube.com/proxmoxve[{pve} YouTube Channel]
+  https://www.youtube.com/proxmoxve[{pve} YouTube Channel]
 
 * Tutorials in Spanish language on
-  http://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
+  https://www.youtube.com/playlist?list=PLUULBIhA5QDBdNf1pcTZ5UXhek63Fij8z[ITexperts.es
   YouTube Play List]
 
 
diff --git a/pve-intro.adoc b/pve-intro.adoc
index e7520d3..86dd651 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -169,7 +169,7 @@ Why Open Source
 
 {pve} uses a Linux kernel and is based on the Debian GNU/Linux
 Distribution. The source code of {pve} is released under the
-http://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
+https://www.gnu.org/licenses/agpl-3.0.html[GNU Affero General Public
 License, version 3]. This means that you are free to inspect the
 source code at any time or contribute to the project yourself.
 
@@ -209,7 +209,7 @@ machines. The clustering features were limited, and the user interface
 was simple (server generated web page).
 
 But we quickly developed new features using the
-http://corosync.github.io/corosync/[Corosync] cluster stack, and the
+https://corosync.github.io/corosync/[Corosync] cluster stack, and the
 introduction of the new Proxmox cluster file system (pmxcfs) was a big
 step forward, because it completely hides the cluster complexity from
 the user. Managing a cluster of 16 nodes is as simple as managing a
@@ -229,7 +229,7 @@ to manage your VMs.
 The support for various storage types is another big task. Notably,
 {pve} was the first distribution to ship ZFS on Linux by default in
 2014. Another milestone was the ability to run and manage
-http://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
+https://ceph.com/[Ceph] storage on the hypervisor nodes. Such setups
 are extremely cost effective.
 
 When we started we were among the first companies providing
diff --git a/pve-storage-cephfs.adoc b/pve-storage-cephfs.adoc
index b5d99db..c67f089 100644
--- a/pve-storage-cephfs.adoc
+++ b/pve-storage-cephfs.adoc
@@ -8,7 +8,7 @@ endif::wiki[]
 
 Storage pool type: `cephfs`
 
-CephFS implements a POSIX-compliant filesystem, using a http://ceph.com[Ceph]
+CephFS implements a POSIX-compliant filesystem, using a https://ceph.com[Ceph]
 storage cluster to store its data. As CephFS builds upon Ceph, it shares most of
 its properties. This includes redundancy, scalability, self-healing, and high
 availability.
diff --git a/pve-storage-iscsi.adoc b/pve-storage-iscsi.adoc
index 93975f5..55e5b21 100644
--- a/pve-storage-iscsi.adoc
+++ b/pve-storage-iscsi.adoc
@@ -11,11 +11,11 @@ Storage pool type: `iscsi`
 iSCSI is a widely employed technology used to connect to storage
 servers. Almost all storage vendors support iSCSI. There are also open
 source iSCSI target solutions available,
-e.g. http://www.openmediavault.org/[OpenMediaVault], which is based on
+e.g. https://www.openmediavault.org/[OpenMediaVault], which is based on
 Debian.
 
 To use this backend, you need to install the
-http://www.open-iscsi.org/[Open-iSCSI] (`open-iscsi`) package. This is a
+https://www.open-iscsi.com/[Open-iSCSI] (`open-iscsi`) package. This is a
 standard Debian package, but it is not installed by default to save
 resources.
 
diff --git a/pve-storage-rbd.adoc b/pve-storage-rbd.adoc
index aa870ed..bbc80e2 100644
--- a/pve-storage-rbd.adoc
+++ b/pve-storage-rbd.adoc
@@ -8,7 +8,7 @@ endif::wiki[]
 
 Storage pool type: `rbd`
 
-http://ceph.com[Ceph] is a distributed object store and file system
+https://ceph.com[Ceph] is a distributed object store and file system
 designed to provide excellent performance, reliability and
 scalability. RADOS block devices implement a feature rich block level
 storage, and you get the following advantages:
diff --git a/pveum.adoc b/pveum.adoc
index 0cebe82..be8f7f3 100644
--- a/pveum.adoc
+++ b/pveum.adoc
@@ -536,7 +536,7 @@ What permission do I need?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 The required API permissions are documented for each individual
-method, and can be found at http://pve.proxmox.com/pve-docs/api-viewer/
+method, and can be found at https://pve.proxmox.com/pve-docs/api-viewer/
 
 The permissions are specified as a list which can be interpreted as a
 tree of logic and access-check functions:
diff --git a/qm-cloud-init.adoc b/qm-cloud-init.adoc
index 12253be..1cebf14 100644
--- a/qm-cloud-init.adoc
+++ b/qm-cloud-init.adoc
@@ -5,7 +5,7 @@ ifdef::wiki[]
 :pve-toplevel:
 endif::wiki[]
 
-http://cloudinit.readthedocs.io[Cloud-Init] is the de facto
+https://cloudinit.readthedocs.io[Cloud-Init] is the de facto
 multi-distribution package that handles early initialization of a
 virtual machine instance. Using Cloud-Init, configuration of network
 devices and ssh keys on the hypervisor side is possible. When the VM
diff --git a/qm.adoc b/qm.adoc
index f42e760..ba303fd 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -85,7 +85,7 @@ versus an emulated IDE controller will double the sequential write throughput,
 as measured with `bonnie++(8)`. Using the virtio network interface can deliver
 up to three times the throughput of an emulated Intel E1000 network card, as
 measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
-http://www.linux-kvm.org/page/Using_VirtIO_NIC]
+https://www.linux-kvm.org/page/Using_VirtIO_NIC]
 
 
 [[qm_virtual_machines_settings]]
@@ -735,8 +735,8 @@ standard setups.
 
 There are, however, some scenarios in which a BIOS is not a good firmware
 to boot from, e.g. if you want to do VGA passthrough. footnote:[Alex Williamson has a very good blog entry about this.
-http://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
-In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project http://www.tianocore.org/ovmf/]
+https://vfio.blogspot.co.at/2014/08/primary-graphics-assignment-without-vga.html]
+In such cases, you should rather use *OVMF*, which is an open-source UEFI implementation. footnote:[See the OVMF Project https://github.com/tianocore/tianocore.github.io/wiki/OVMF]
 
 If you want to use OVMF, there are several things to consider:
 
-- 
2.20.1