
Simple upload

If you need to upload local files, images, videos, or other resources no larger than 5 GB to OSS and concurrent upload performance is not a priority, you can use the simple upload method.

Prerequisites

A storage space (bucket) has been created. For more information, see Create storage space.

Limits

Files uploaded by using simple upload cannot exceed 5 GB in size. To upload a larger file, use multipart upload.

Precautions

Data security

Prevent overwriting objects with the same name

By default, simple upload overwrites an existing object that has the same name. You can use either of the following methods to prevent objects from being accidentally overwritten.

  • Enable versioning

    After versioning is enabled, an object that is overwritten is saved as a previous version, and you can restore previous versions at any time. For more information, see Introduction to version control.

  • Include a header in the upload request that forbids overwriting objects with the same name

    Include the x-oss-forbid-overwrite header in the upload request and set its value to true. If an object with the same name already exists in OSS, the upload fails and the FileAlreadyExists error is returned. If this header is not included or its value is false, the existing object with the same name is overwritten. A hedged SDK sketch is shown after this list.
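
For illustration, the following minimal Java sketch (modeled on the Java SDK example later in this topic; the endpoint, bucket name, object name, and local path are placeholder values) shows one way to set the header on a simple upload request:

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.PutObjectRequest;
import java.io.File;

public class ForbidOverwriteDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; credentials are read from environment variables, as in the full Java example below.
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com",
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider());
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // Make the upload fail with FileAlreadyExists instead of overwriting an existing object with the same name.
            metadata.setHeader("x-oss-forbid-overwrite", "true");
            PutObjectRequest request = new PutObjectRequest("examplebucket", "exampledir/exampleobject.txt",
                    new File("D:\\localpath\\examplefile.txt"));
            request.setMetadata(metadata);
            ossClient.putObject(request);
        } finally {
            ossClient.shutdown();
        }
    }
}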

Authorized upload

  • To prevent unauthorized third parties from uploading data to your bucket, OSS provides bucket-level and object-level access control. For more information, see access control.

  • Use a signed URL to authorize a third party to upload a specified file. A signed URL allows a third-party user to upload without holding your security credentials or being granted additional permissions. After the third party uploads the file by using the signed URL, OSS generates the file in the specified bucket. For more information, see Upload a file using the file URL. A hedged sketch of generating a signed upload URL with the Java SDK is shown after this list.
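
As an illustration only (the bucket name, object name, and one-hour validity period are placeholder values), the following Java sketch generates a signed URL that authorizes a PUT upload:

import com.aliyun.oss.HttpMethod;
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.model.GeneratePresignedUrlRequest;
import java.net.URL;
import java.util.Date;

public class SignedUrlDemo {
    public static void main(String[] args) throws Exception {
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com",
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider());
        try {
            // Request a signed URL that authorizes an HTTP PUT request for the placeholder object.
            GeneratePresignedUrlRequest request =
                    new GeneratePresignedUrlRequest("examplebucket", "exampledir/exampleobject.txt", HttpMethod.PUT);
            // The URL expires one hour after it is generated.
            request.setExpiration(new Date(System.currentTimeMillis() + 3600 * 1000L));
            URL signedUrl = ossClient.generatePresignedUrl(request);
            // Hand this URL to the third party, who can then upload by sending an HTTP PUT request to it.
            System.out.println(signedUrl);
        } finally {
            ossClient.shutdown();
        }
    }
}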

Reduce the cost of PUT requests

If you need to upload a large number of files, uploading them directly as the Deep Cold Archive storage class incurs high PUT request fees. We recommend that you upload the files as the Standard storage class first, and then use lifecycle rules to transition them to the Deep Cold Archive storage class to reduce the PUT request fees. A hedged lifecycle-rule sketch is shown below.
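
The following Java sketch shows one way such a lifecycle rule might be configured. It assumes a Java SDK version whose StorageClass enumeration includes DeepColdArchive; the rule ID, prefix, and 30-day transition interval are placeholder values:

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.model.LifecycleRule;
import com.aliyun.oss.model.SetBucketLifecycleRequest;
import com.aliyun.oss.model.StorageClass;
import java.util.Collections;

public class LifecycleDemo {
    public static void main(String[] args) throws Exception {
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com",
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider());
        try {
            SetBucketLifecycleRequest request = new SetBucketLifecycleRequest("examplebucket");
            // Placeholder rule: objects whose names start with logs/ are transitioned 30 days after they are last modified.
            LifecycleRule rule = new LifecycleRule("transition-to-deep-cold-archive", "logs/",
                    LifecycleRule.RuleStatus.Enabled);
            LifecycleRule.StorageTransition transition = new LifecycleRule.StorageTransition();
            transition.setExpirationDays(30);
            // Assumes that the SDK version in use supports the DeepColdArchive storage class.
            transition.setStorageClass(StorageClass.DeepColdArchive);
            rule.setStorageTransition(Collections.singletonList(transition));
            request.AddLifecycleRule(rule);
            ossClient.setBucketLifecycle(request);
        } finally {
            ossClient.shutdown();
        }
    }
}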

Avoid affecting the OSS-HDFS service

To avoid affecting the normal use of the OSS-HDFS service or risking data loss, do not upload objects to the .dlsdata/ directory of a bucket for which OSS-HDFS is enabled by any means other than the OSS-HDFS service.

Upload performance tuning

If you upload a large number of objects whose names use sequential prefixes (such as timestamps or alphabetical ordering), a large number of object indexes may be concentrated in a single partition of the bucket, which lowers the request rate. When you upload a large number of objects, we recommend that you replace sequential prefixes with random prefixes in object names. For more information, see OSS Performance and Scalability Best Practices. A hedged prefix-randomization sketch is shown below.
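
As a plain illustration (the helper name and prefix length are hypothetical and not part of any OSS API), the following Java sketch prepends a short hash of the sequential name so that object keys spread more evenly across partitions:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class RandomPrefixDemo {
    // Hypothetical helper: derive a short hexadecimal prefix from the sequential object name.
    static String randomizedKey(String sequentialName) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(sequentialName.getBytes(StandardCharsets.UTF_8));
        StringBuilder prefix = new StringBuilder();
        for (int i = 0; i < 4; i++) {
            prefix.append(String.format("%02x", digest[i]));
        }
        // A timestamp-ordered name gains a pseudo-random prefix, for example "3a7f...-20240101120000_0001.log".
        return prefix + "-" + sequentialName;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(randomizedKey("20240101120000_0001.log"));
        System.out.println(randomizedKey("20240101120000_0002.log"));
    }
}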

Procedure

Using the OSS Console

Note

OSS on the Finance Cloud does not have public network regions, so files cannot be uploaded through the console. Upload files by using tools such as the SDK, ossutil, or ossbrowser instead.

  1. Log on to the OSS Management Console.

  2. Click Bucket List, and then click the name of the target bucket.

  3. In the left-side navigation pane, choose File Management > File List.

  4. On the File List page, click Upload File.

  5. In the Upload File panel, configure the following parameters.

    1. Set basic options.

      Upload to

      Set the storage path after the file is uploaded to the target bucket.

      • Current directory: Upload the file to the current directory.

      • Specified directory: Upload the file to the specified directory. You must enter a directory name. If the directory does not exist, OSS automatically creates it and uploads the file to it.

        The directory naming rules are as follows:

        • The directory name must consist of UTF-8 characters and be 1 to 254 characters in length.

        • The directory name cannot start with a forward slash (/) or a backslash (\).

        • The directory name cannot contain consecutive forward slashes (/).

        • The directory name cannot be two consecutive periods (..).

      File ACL

      Set the read and write permissions (ACL) of the file.

      • Inherit Bucket: The file inherits the read and write permissions of the bucket.

      • Private (recommended): Only the owner of the file has read and write permissions on the file. Other users cannot access the file.

      • Public Read: The owner of the file has read and write permissions on the file, and other users (including anonymous visitors) can read the file. This may lead to data leaks and unexpectedly high fees. Proceed with caution.

      • Public Read/Write: Any user (including anonymous visitors) can read the file and write data to it. This may lead to data leaks and unexpectedly high fees. If someone maliciously writes prohibited information, your legitimate rights and interests may also be infringed. We recommend that you do not use public read/write except in special scenarios.

      For more information about file ACLs, see Set Object ACL

      File to be uploaded

      Select the file or folder you want to upload.

      You can click Scan files or Scan Folders to select a local file or folder, or drag the target file or folder directly to the upload area.

      If the folder contains files that you do not want to upload, click remove to move them out of the file list.

      Important
      • If versioning is not enabled for the bucket and the uploaded file has the same name as an existing file, the existing file is overwritten.

      • If versioning is enabled for the bucket and the uploaded file has the same name as an existing file, the uploaded file becomes the current version and the existing file becomes a previous version.

    2. Optional: Set advanced options such as file storage type and encryption method.

      Storage type

      Set the file storage type.

      • Inherit Bucket: The file uses the storage class of the bucket.

      • Standard storage: Provides highly reliable, highly available, high-performance object storage that supports frequent data access. It is suitable for business scenarios such as social networking and image sharing, audio and video applications, large websites, and big data analytics.

      • Infrequent Access (IA) storage: Provides highly durable object storage at a lower storage cost. IA objects have a minimum billable size (64 KB) and a minimum storage duration (30 days). The data can be accessed in real time, but data retrieval fees are incurred when the data is accessed. It is suitable for data that is accessed infrequently (once or twice a month on average).

      • Archive storage: Provides highly durable object storage at a very low storage cost. Archive objects have a minimum billable size (64 KB) and a minimum storage duration (60 days). The data must be restored (which takes about 1 minute) before it can be accessed, and restoration incurs data retrieval fees. If real-time access of Archive objects is enabled, the data can be accessed without restoration, and direct-read retrieval fees are incurred. It is suitable for data that must be stored for a long time, such as archival data, medical images, scientific materials, and video footage.

      • Cold Archive storage: Provides highly durable object storage at a lower storage cost than Archive storage. Cold Archive objects have a minimum billable size (64 KB) and a minimum storage duration (180 days). The data must be restored before it can be accessed. The restoration time depends on the data size and the selected restoration mode, and restoration incurs data retrieval fees and retrieval request fees. It is suitable for cold data that must be stored for a very long time, such as data retained for compliance requirements, raw data accumulated over long periods in big data and artificial intelligence scenarios, media assets retained long-term in the film and television industry, and archived videos in the online education industry.

      • Deep Cold Archive storage: Provides highly durable object storage at a lower storage cost than Cold Archive storage. Deep Cold Archive objects have a minimum billable size (64 KB) and a minimum storage duration (180 days). The data must be restored before it can be accessed. The restoration time depends on the data size and the selected restoration mode, and restoration incurs data retrieval fees and retrieval request fees. It is suitable for extremely cold data that must be stored for a very long time, such as raw data accumulated and retained long-term in big data and artificial intelligence scenarios, media data retained long-term, regulatory and compliance archives, and tape replacement.

      For more information, see Introduction to Storage Types

      Server-side encryption

      Set the server-side encryption method of the file.

      • Inherit Bucket: The file uses the server-side encryption method of the bucket.

      • OSS-Managed: Keys managed by OSS are used for encryption. OSS encrypts each object with a different key and, as additional protection, encrypts the keys themselves with a master key that is rotated periodically.

      • KMS: The default CMK managed by KMS or a specified CMK ID is used to encrypt and decrypt data. The encryption key options are described as follows:

        • alias/acs/oss: The default CMK managed by KMS is used to generate a different key for each object, and the objects are automatically decrypted when they are downloaded.

        • CMK ID: The specified CMK is used to generate a different key for each object, and the ID of the CMK used is recorded in the metadata of the object. The object is automatically decrypted when it is downloaded by a user who has decryption permissions. Before you select a specific CMK ID, you must create a normal key or an external key in the same region as the bucket in the KMS Management Console.

      • Encryption algorithm: AES256 or SM4 can be selected.

      User-defined metadata

      Used to add descriptive information to the object. You can add multiple pieces of user-defined metadata, but their total size cannot exceed 8 KB. User-defined metadata parameters must use the x-oss-meta- prefix and must be assigned a value, for example, x-oss-meta-location:hangzhou. A hedged SDK sketch is provided after these console steps.

    3. Click Upload File.

      You can then click the Upload List tab to view the upload progress of each file.
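
The x-oss-meta- prefix described above also applies when you upload through an SDK. As a hedged sketch based on the Java SDK example in the next section (the endpoint, bucket name, object name, and local path are placeholder values), user-defined metadata can be attached like this:

import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.common.auth.CredentialsProviderFactory;
import com.aliyun.oss.model.ObjectMetadata;
import com.aliyun.oss.model.PutObjectRequest;
import java.io.File;

public class UserMetadataDemo {
    public static void main(String[] args) throws Exception {
        OSS ossClient = new OSSClientBuilder().build("https://oss-cn-hangzhou.aliyuncs.com",
                CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider());
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // The SDK adds the x-oss-meta- prefix, so this is sent as x-oss-meta-location: hangzhou.
            metadata.addUserMetadata("location", "hangzhou");
            PutObjectRequest request = new PutObjectRequest("examplebucket", "exampledir/exampleobject.txt",
                    new File("D:\\localpath\\examplefile.txt"));
            request.setMetadata(metadata);
            ossClient.putObject(request);
        } finally {
            ossClient.shutdown();
        }
    }
}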

Using the graphical management tool ossbrowser

ossbrowser supports bucket-level operations similar to those of the console. Follow the instructions in the ossbrowser interface to perform simple uploads. For more information about how to use ossbrowser, see Quickly use ossbrowser.

Using the Alibaba Cloud SDKs

The following code provides simple upload examples for common SDKs. For simple upload examples of other SDKs, see SDK Introduction.

Java

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.common.auth.*;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.PutObjectRequest;
import com.aliyun.oss.model.PutObjectResult;
import java.io.File;

public class Demo {
    public static void main(String[] args) throws Exception {
        // The endpoint uses China East 1 (Hangzhou) as an example. Specify the actual endpoint of your region.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
        EnvironmentVariableCredentialsProvider credentialsProvider = CredentialsProviderFactory.newEnvironmentVariableCredentialsProvider();
        // Specify the bucket name, such as examplebucket.
        String bucketName = "examplebucket";
        // Specify the full path of the object, such as exampledir/exampleobject.txt. The full path cannot contain the bucket name.
        String objectName = "exampledir/exampleobject.txt";
        // Specify the full path of the local file, such as D:\\localpath\\examplefile.txt.
        // If no local path is specified, the file is uploaded from the local path of the project to which the sample program belongs.
        String filePath = "D:\\localpath\\examplefile.txt";

        // Create an OSSClient instance.
        OSS ossClient = new OSSClientBuilder().build(endpoint, credentialsProvider);

        try {
            // Create a PutObjectRequest object.
            PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, objectName, new File(filePath));

            // To set the storage class and access permissions during the upload, refer to the following sample code.
            // ObjectMetadata metadata = new ObjectMetadata();
            // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
            // metadata.setObjectAcl(CannedAccessControlList.Private);
            // putObjectRequest.setMetadata(metadata);

            // Upload the file.
            PutObjectResult result = ossClient.putObject(putObjectRequest);
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}
PHP

<?php
if (is_file(__DIR__ . '/../autoload.php')) {
    require_once __DIR__ . '/../autoload.php';
}
if (is_file(__DIR__ . '/../vendor/autoload.php')) {
    require_once __DIR__ . '/../vendor/autoload.php';
}

use OSS\Credentials\EnvironmentVariableCredentialsProvider;
use OSS\OssClient;
use OSS\Core\OssException;

// Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
$provider = new EnvironmentVariableCredentialsProvider();
// Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, for China East 1 (Hangzhou), the endpoint is https://oss-cn-hangzhou.aliyuncs.com.
$endpoint = "yourEndpoint";
// Specify the bucket name, such as examplebucket.
$bucket = "examplebucket";
// Specify the full path of the object, such as exampledir/exampleobject.txt. The full path cannot contain the bucket name.
$object = "exampledir/exampleobject.txt";
// Specify the full path of the local file, such as D:\\localpath\\examplefile.txt. If no local path is specified, the file is uploaded from the local path of the project to which the sample program belongs.
$filePath = "D:\\localpath\\examplefile.txt";

try {
    $config = array(
        "provider" => $provider,
        "endpoint" => $endpoint,
    );
    $ossClient = new OssClient($config);
    $ossClient->uploadFile($bucket, $object, $filePath);
} catch (OssException $e) {
    printf(__FUNCTION__ . ": FAILED\n");
    printf($e->getMessage() . "\n");
    return;
}
print(__FUNCTION__ . "OK" . "\n");
Node.js

const OSS = require('ali-oss');
const path = require('path');

const client = new OSS({
  // Set yourregion to the region in which the bucket is located. For example, for China East 1 (Hangzhou), set the region to oss-cn-hangzhou.
  region: 'yourregion',
  // Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
  accessKeyId: process.env.OSS_ACCESS_KEY_ID,
  accessKeySecret: process.env.OSS_ACCESS_KEY_SECRET,
  // Specify the bucket name.
  bucket: 'examplebucket',
});

// Custom request headers.
const headers = {
  // Specify the storage class of the object.
  'x-oss-storage-class': 'Standard',
  // Specify the access permissions of the object.
  'x-oss-object-acl': 'private',
  // Specify that the object is downloaded as an attachment named example.txt when it is accessed through its URL.
  'Content-Disposition': 'attachment; filename="example.txt"',
  // Set tags for the object. Multiple tags can be set at the same time.
  'x-oss-tagging': 'Tag1=1&Tag2=2',
  // Specify whether the PutObject operation overwrites an object with the same name. The value true means that an object with the same name cannot be overwritten.
  'x-oss-forbid-overwrite': 'true',
};

async function put() {
  try {
    // Specify the full path of the OSS object and the full path of the local file. The full path of the object cannot contain the bucket name.
    // If no local path is specified, the file is uploaded from the local path of the project to which the sample program belongs.
    const result = await client.put(
      'exampleobject.txt',
      path.normalize('D:\\localpath\\examplefile.txt'),
      // Custom headers
      { headers }
    );
    console.log(result);
  } catch (e) {
    console.log(e);
  }
}

put();
Python

# -*- coding: utf-8 -*-
import oss2
import os
from oss2.credentials import EnvironmentVariableCredentialsProvider

# Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
auth = oss2.ProviderAuth(EnvironmentVariableCredentialsProvider())
# Set the endpoint to the endpoint of the region in which the bucket is located. For example, for China East 1 (Hangzhou), the endpoint is https://oss-cn-hangzhou.aliyuncs.com.
# Specify the bucket name.
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')

# The file must be opened in binary mode.
# Specify the full path of the local file. If no local path is specified, the file is uploaded from the local path of the project to which the sample program belongs.
with open('D:\\localpath\\examplefile.txt', 'rb') as fileobj:
    # The seek method specifies that reading starts from the 1,000th byte. The file is uploaded from that position to the end of the file.
    fileobj.seek(1000, os.SEEK_SET)
    # The tell method returns the current position.
    current = fileobj.tell()
    # Specify the full path of the object. The full path cannot contain the bucket name.
    bucket.put_object('exampleobject.txt', fileobj)
Browser.js

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <title>Document</title>
</head>
<body>
  <input id="file" type="file" />
  <button id="upload">Upload</button>
  <script src="https://gosspublic.alicdn.com/aliyun-oss-sdk-6.18.0.min.js"></script>
  <script>
    const client = new OSS({
      // Set yourRegion to the region in which the bucket is located. For example, for China East 1 (Hangzhou), set yourRegion to oss-cn-hangzhou.
      region: "yourRegion",
      // The temporary access credentials (AccessKey ID and AccessKey secret) obtained from the STS service.
      accessKeyId: "yourAccessKeyId",
      accessKeySecret: "yourAccessKeySecret",
      // The security token (SecurityToken) obtained from the STS service.
      stsToken: "yourSecurityToken",
      // Specify the bucket name.
      bucket: "examplebucket",
    });

    // Obtain the file object from the input box, for example, <input type="file" id="file"/>.
    let data;
    // Create and fill in Blob data.
    // const data = new Blob(['Hello OSS']);
    // Create and fill in OSS Buffer content.
    // const data = new OSS.Buffer(['Hello OSS']);

    const upload = document.getElementById("upload");

    async function putObject(data) {
      try {
        // Specify the full path of the object. The full path cannot contain the bucket name.
        // You can upload data to the bucket or to a specified directory in the bucket by using a custom object name (for example, exampleobject.txt) or the full path of the object (for example, exampledir/exampleobject.txt).
        // The data object can be a file object, Blob data, or an OSS Buffer.
        const options = {
          meta: { temp: "demo" },
          mime: "json",
          headers: { "Content-Type": "text/plain" },
        };
        const result = await client.put("examplefile.txt", data, options);
        console.log(result);
      } catch (e) {
        console.log(e);
      }
    }

    upload.addEventListener("click", () => {
      const data = file.files[0];
      putObject(data);
    });
  </script>
</body>
</html>
.NET

using System;
using Aliyun.OSS;

// Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, for China East 1 (Hangzhou), the endpoint is https://oss-cn-hangzhou.aliyuncs.com.
var endpoint = "yourEndpoint";
// Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
var accessKeyId = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_ID");
var accessKeySecret = Environment.GetEnvironmentVariable("OSS_ACCESS_KEY_SECRET");
// Specify the bucket name, such as examplebucket.
var bucketName = "examplebucket";
// Specify the full path of the object, such as exampledir/exampleobject.txt. The full path cannot contain the bucket name.
var objectName = "exampledir/exampleobject.txt";
// Specify the full path of the local file. If no local path is specified, the file is uploaded from the local path of the project to which the sample program belongs.
var localFilename = "D:\\localpath\\examplefile.txt";

// Create an OssClient instance.
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
try
{
    // Upload the file.
    client.PutObject(bucketName, objectName, localFilename);
    Console.WriteLine("Put object succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("Put object failed, {0}", ex.Message);
}
Android

// Construct the upload request.
// Specify the bucket name (for example, examplebucket), the full path of the object (for example, exampledir/exampleobject.txt), and the full path of the local file (for example, /storage/emulated/0/oss/examplefile.txt).
// The full path of the object cannot contain the bucket name.
PutObjectRequest put = new PutObjectRequest("examplebucket", "exampledir/exampleobject.txt", "/storage/emulated/0/oss/examplefile.txt");

// Setting object metadata is optional.
ObjectMetadata metadata = new ObjectMetadata();
// metadata.setContentType("application/octet-stream"); // Set the content type.
// metadata.setContentMD5(BinaryUtil.calculateBase64Md5(uploadFilePath)); // Verify MD5.
// Set the access permissions of the object to private.
metadata.setHeader("x-oss-object-acl", "private");
// Set the storage class of the object to Standard.
metadata.setHeader("x-oss-storage-class", "Standard");
// Specify that an object with the same name cannot be overwritten.
// metadata.setHeader("x-oss-forbid-overwrite", "true");
// Specify tags for the object. Multiple tags can be set at the same time.
// metadata.setHeader("x-oss-tagging", "a:1");
// Specify the server-side encryption algorithm that OSS uses to create the object.
// metadata.setHeader("x-oss-server-side-encryption", "AES256");
// Specify the customer master key (CMK) managed by KMS. This parameter takes effect only when x-oss-server-side-encryption is set to KMS.
// metadata.setHeader("x-oss-server-side-encryption-key-id", "9468da86-3509-4f8d-a61e-6eab1eac****");
put.setMetadata(metadata);

try {
    PutObjectResult putResult = oss.putObject(put);
    Log.d("PutObject", "UploadSuccess");
    Log.d("ETag", putResult.getETag());
    Log.d("RequestId", putResult.getRequestId());
} catch (ClientException e) {
    // Client exceptions, such as network exceptions.
    e.printStackTrace();
} catch (ServiceException e) {
    // Server-side exceptions.
    Log.e("RequestId", e.getRequestId());
    Log.e("ErrorCode", e.getErrorCode());
    Log.e("HostId", e.getHostId());
    Log.e("RawMessage", e.getRawMessage());
}
Go

package main

import (
    "fmt"
    "os"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
    provider, err := oss.NewEnvironmentVariableCredentialsProvider()
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Create an OSSClient instance.
    // Set yourEndpoint to the endpoint of the bucket. For example, for China East 1 (Hangzhou), set it to https://oss-cn-hangzhou.aliyuncs.com. Specify the actual endpoint of your region.
    client, err := oss.New("yourEndpoint", "", "", oss.SetCredentialsProvider(&provider))
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the bucket name, such as examplebucket.
    bucket, err := client.Bucket("examplebucket")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }

    // Specify the full path of the object (for example, exampledir/exampleobject.txt) and the full path of the local file (for example, D:\\localpath\\examplefile.txt).
    err = bucket.PutObjectFromFile("exampledir/exampleobject.txt", "D:\\localpath\\examplefile.txt")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
}
iOS

OSSPutObjectRequest *put = [OSSPutObjectRequest new];
// Specify the bucket name, such as examplebucket.
put.bucketName = @"examplebucket";
// Specify the full path of the object, such as exampledir/exampleobject.txt. The full path cannot contain the bucket name.
put.objectKey = @"exampledir/exampleobject.txt";
put.uploadingFileURL = [NSURL fileURLWithPath:@"<filePath>"];
// put.uploadingData = <NSData *>; // Upload NSData directly.

// (Optional) Set the upload progress callback.
put.uploadProgress = ^(int64_t bytesSent, int64_t totalByteSent, int64_t totalBytesExpectedToSend) {
    // Indicate the number of bytes sent in the current call, the total number of bytes sent, and the total number of bytes to send.
    NSLog(@"%lld, %lld, %lld", bytesSent, totalByteSent, totalBytesExpectedToSend);
};

// Configure optional fields.
// put.contentType = @"application/octet-stream";
// Set Content-MD5.
// put.contentMd5 = @"eB5eJF1ptWaXm4bijSPyxw==";
// Set the content encoding of the object.
// put.contentEncoding = @"identity";
// Set how the object is presented.
// put.contentDisposition = @"attachment";
// You can set object metadata or HTTP headers when you upload the file.
// NSMutableDictionary *meta = [NSMutableDictionary dictionary];
// Set object metadata.
// [meta setObject:@"value" forKey:@"x-oss-meta-name1"];
// Set the access permissions of the object to private.
// [meta setObject:@"private" forKey:@"x-oss-object-acl"];
// Set the storage class of the object to Standard.
// [meta setObject:@"Standard" forKey:@"x-oss-storage-class"];
// Specify that an object with the same name cannot be overwritten.
// [meta setObject:@"true" forKey:@"x-oss-forbid-overwrite"];
// Specify tags for the object. Multiple tags can be set at the same time.
// [meta setObject:@"a:1" forKey:@"x-oss-tagging"];
// Specify the server-side encryption algorithm that OSS uses to create the object.
// [meta setObject:@"AES256" forKey:@"x-oss-server-side-encryption"];
// Specify the customer master key (CMK) managed by KMS. This parameter takes effect only when x-oss-server-side-encryption is set to KMS.
// [meta setObject:@"9468da86-3509-4f8d-a61e-6eab1eac****" forKey:@"x-oss-server-side-encryption-key-id"];
// put.objectMeta = meta;

OSSTask *putTask = [client putObject:put];
[putTask continueWithBlock:^id(OSSTask *task) {
    if (!task.error) {
        NSLog(@"upload object success!");
    } else {
        NSLog(@"upload object failed, error: %@", task.error);
    }
    return nil;
}];

// waitUntilFinished blocks the current thread but does not block the upload task.
// [putTask waitUntilFinished];
// [put cancel];
C++

#include <alibabacloud/oss/OssClient.h>
#include <fstream>
#include <iostream>
#include <memory>
using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize the OSS account information. */
    /* Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, for China East 1 (Hangzhou), the endpoint is https://oss-cn-hangzhou.aliyuncs.com. */
    std::string Endpoint = "yourEndpoint";
    /* Specify the bucket name, such as examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object, such as exampledir/exampleobject.txt. The full path cannot contain the bucket name. */
    std::string ObjectName = "exampledir/exampleobject.txt";

    /* Initialize resources such as the network. */
    InitializeSdk();

    ClientConfiguration conf;
    /* Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set. */
    auto credentialsProvider = std::make_shared<EnvironmentVariableCredentialsProvider>();
    OssClient client(Endpoint, credentialsProvider, conf);

    /* Specify the full path of the local file, such as D:\\localpath\\examplefile.txt, where localpath is the local path in which examplefile.txt is stored. */
    std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>("D:\\localpath\\examplefile.txt", std::ios::in | std::ios::binary);
    PutObjectRequest request(BucketName, ObjectName, content);

    /* (Optional) The following sample code sets the access ACL to private and the storage class to Standard. */
    // request.MetaData().addHeader("x-oss-object-acl", "private");
    // request.MetaData().addHeader("x-oss-storage-class", "Standard");

    auto outcome = client.PutObject(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "PutObject fail"
                  << ",code:" << outcome.error().Code()
                  << ",message:" << outcome.error().Message()
                  << ",requestId:" << outcome.error().RequestId() << std::endl;
        return -1;
    }

    /* Release resources such as the network. */
    ShutdownSdk();
    return 0;
}
 #include "oss_api.h" #include "aos_http_io.h" /*YourEndpoint fills in the endpoint corresponding to the bucket's region. Taking East China 1 (Hangzhou) as an example, the Endpoint is filled in as https://oss-cn-hangzhou.aliyuncs.com 。*/ const char *endpoint = "yourEndpoint"; /*Fill in the bucket name, such as examplebucket*/ const char *bucket_name = "examplebucket"; /*Fill in the full path of the object. The full path cannot contain bucket names, such as exampledir/exampleobject.txt*/ const char *object_name = "exampledir/exampleobject.txt"; const char *object_content = "More than just cloud. "; void init_options(oss_request_options_t *options) { options->config = oss_config_create(options->pool); /*Initialize the aos_string_t type with a string of char * type*/ aos_str_set(&options->config->endpoint, endpoint); /*Get access credentials from environment variables. Before running this code example, make sure that the environment variables OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET have been set*/ aos_str_set(&options->config->access_key_id, getenv("OSS_ACCESS_KEY_ID")); aos_str_set(&options->config->access_key_secret, getenv("OSS_ACCESS_KEY_SECRET")); /*Whether CNAME is used. 0 means not used*/ options->config->is_cname = 0; /*Set network related parameters, such as timeout*/ options->ctl = aos_http_controller_create(options->pool, 0); } int main(int argc, char *argv[]) { /*Call the aos_http_io_initialize method at the program entrance to initialize global resources such as network and memory*/ if (aos_http_io_initialize(NULL, 0) !=  AOSE_OK) { exit(1); } /*The memory pool used for memory management is equivalent to apr_pool_t. In fact, modern codes are in the apr library*/ aos_pool_t *pool; /*Recreate a memory pool. The second parameter is NULL, which means no other memory pool is inherited*/ aos_pool_create(&pool, NULL); /*Create and initialize options. This parameter includes global configuration information such as endpoint, access_key_id, access_key_secret, is_cname, curl, etc*/ oss_request_options_t *oss_client_options; /*Allocate memory to options in the memory pool*/ oss_client_options = oss_request_options_create(pool); /*Initialize the option oss_client_options of the client*/ init_options(oss_client_options); /*Initialization parameters*/ aos_string_t bucket; aos_string_t object; aos_list_t buffer; aos_buf_t *content = NULL; aos_table_t *headers = NULL; aos_table_t *resp_headers = NULL;  aos_status_t *resp_status = NULL;  aos_str_set(&bucket, bucket_name); aos_str_set(&object, object_name); aos_list_init(&buffer); content = aos_buf_pack(oss_client_options->pool, object_content, strlen(object_content)); aos_list_add_tail(&content->node, &buffer); /*Upload files*/ resp_status = oss_put_object_from_buffer(oss_client_options, &bucket, &object, &buffer,  headers, &resp_headers); /*Determine whether the upload is successful*/ if (aos_status_is_ok(resp_status)) { printf("put object from buffer succeeded\n"); } else { printf("put object from buffer failed\n");       } /*Releasing the memory pool is equivalent to releasing the memory allocated by each resource during the request process*/ aos_pool_destroy(pool); /*Release previously allocated global resources*/ aos_http_io_deinitialize(); return 0; }
Ruby

require 'aliyun/oss'

client = Aliyun::OSS::Client.new(
  # The endpoint uses China East 1 (Hangzhou) as an example. Specify the actual endpoint of your region.
  endpoint: 'https://oss-cn-hangzhou.aliyuncs.com',
  # Obtain access credentials from environment variables. Before you run this sample code, make sure that the OSS_ACCESS_KEY_ID and OSS_ACCESS_KEY_SECRET environment variables are set.
  access_key_id: ENV['OSS_ACCESS_KEY_ID'],
  access_key_secret: ENV['OSS_ACCESS_KEY_SECRET']
)

# Specify the bucket name, such as examplebucket.
bucket = client.get_bucket('examplebucket')
# Upload the file.
bucket.put_object('exampleobject.txt', :file => 'D:\\localpath\\examplefile.txt')

Using the command line tool ossutil

For more information about how to perform simple upload by using ossutil, see Simple upload.

Use REST API

If your program requires a high degree of customization, you can directly send REST API requests. To send a REST API request directly, you must write code to calculate the request signature. For more information, see PutObject. A hedged signing sketch is shown below.
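
For illustration only, the following minimal Java sketch sends a PutObject request signed with the legacy header-based signature (HMAC-SHA1 over the verb, Content-MD5, Content-Type, Date, and the canonicalized resource). The endpoint, bucket, object name, and content are placeholder values; production code should follow the official signature documentation:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class PutObjectRestDemo {
    public static void main(String[] args) throws Exception {
        String accessKeyId = System.getenv("OSS_ACCESS_KEY_ID");
        String accessKeySecret = System.getenv("OSS_ACCESS_KEY_SECRET");
        String bucket = "examplebucket";                  // placeholder bucket name
        String object = "exampledir/exampleobject.txt";   // placeholder object name
        String endpoint = "oss-cn-hangzhou.aliyuncs.com"; // placeholder endpoint
        byte[] body = "Hello OSS".getBytes(StandardCharsets.UTF_8);
        String contentType = "text/plain";

        // The Date header must be in GMT format.
        SimpleDateFormat fmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        String date = fmt.format(new Date());

        // Legacy header signature: VERB\nContent-MD5\nContent-Type\nDate\nCanonicalizedResource (no x-oss-* headers used here).
        String stringToSign = "PUT\n" + "\n" + contentType + "\n" + date + "\n" + "/" + bucket + "/" + object;
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(accessKeySecret.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        String signature = Base64.getEncoder().encodeToString(mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8)));

        // Send the PutObject request to the virtual-hosted-style endpoint of the bucket.
        URL url = new URL("https://" + bucket + "." + endpoint + "/" + object);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Date", date);
        conn.setRequestProperty("Content-Type", contentType);
        conn.setRequestProperty("Authorization", "OSS " + accessKeyId + ":" + signature);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        System.out.println("HTTP status: " + conn.getResponseCode());
        conn.disconnect();
    }
}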

Related Documents

  • We recommend that you upload files to OSS by using client-side direct upload. Compared with proxying uploads through your application server, direct upload from the client avoids relaying files through the server, which increases upload speed and saves server resources. For more information, see Client direct transmission.

  • During simple upload, you can include object metadata to describe the object, such as standard HTTP headers like Content-Type or user-defined metadata. For more information about object metadata, see Set file metadata.

  • After the file is uploaded to OSS, you can send a callback request to the specified application server through the upload callback. See Upload callback

  • If you want to compress uploaded images or apply custom styles to them, see Operation mode of image processing.

  • If you need to obtain information such as the size of an uploaded image, you can append ?x-oss-process=image/info to the image URL to return the basic information of the image. For more information, see Obtain image information.

  • If you want to perform text recognition, caption extraction, video transcoding, and video cover generation on uploaded pictures or videos, see Media processing

  • If you want to preview or edit the uploaded documents in PDF, PPT, Word and other formats online, see WebOffice preview and collaborative editing

  • After the file is uploaded, you can add signature information to the URL so that the URL can be transferred to a third party for authorized access. For more information, see Include signature in URL

  • Whether a file is previewed or downloaded when it is accessed through its file URL depends on the domain name type and the time when the bucket was created. For more information, see Access the file through the file URL, and the file cannot be previewed but downloaded as an attachment?

  • When you run batch jobs by using frameworks such as Hadoop and Spark, you can use OSS as the storage. After files are uploaded, you can access the OSS data in ECI. For more information, see Access OSS data in ECI.
